Mar 12 16:02:29 crc systemd[1]: Starting Kubernetes Kubelet... Mar 12 16:02:29 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Mar 12 16:02:29 
crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Mar 12 16:02:29 crc restorecon[4686]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 
16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:29 crc restorecon[4686]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc 
restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 12 16:02:29 crc restorecon[4686]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 
crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 
crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:29 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 
16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 16:02:30 crc 
restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 
16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 
16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 12 16:02:30 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 12 16:02:31 crc kubenswrapper[4687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 16:02:31 crc kubenswrapper[4687]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 16:02:31 crc kubenswrapper[4687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 16:02:31 crc kubenswrapper[4687]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 16:02:31 crc kubenswrapper[4687]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 12 16:02:31 crc kubenswrapper[4687]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.484650 4687 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493332 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493400 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493412 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493421 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493429 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493436 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493444 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493452 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493460 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493468 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493475 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493484 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493492 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493500 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493508 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493516 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493524 4687 feature_gate.go:330] unrecognized feature gate: Example Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493534 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493544 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493554 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493565 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493575 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493584 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493593 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493602 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493610 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493617 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493627 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493637 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493646 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493657 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493665 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493673 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493681 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493688 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493698 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493706 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493713 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493722 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493731 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493739 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493748 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493757 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493764 4687 feature_gate.go:330] unrecognized 
feature gate: VSphereControlPlaneMachineSet Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493772 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493779 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493787 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493795 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493805 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493815 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493824 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493832 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493840 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493848 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493857 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493866 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493876 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493884 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493892 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493900 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493907 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493915 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493922 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493930 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493937 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493945 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493953 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493960 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493970 4687 feature_gate.go:351] Setting 
deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493979 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.493988 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495073 4687 flags.go:64] FLAG: --address="0.0.0.0" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495098 4687 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495112 4687 flags.go:64] FLAG: --anonymous-auth="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495124 4687 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495136 4687 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495146 4687 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495157 4687 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495167 4687 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495177 4687 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495186 4687 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495196 4687 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495207 4687 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495216 4687 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495227 4687 flags.go:64] FLAG: --cgroup-root="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495236 4687 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495245 4687 flags.go:64] FLAG: --client-ca-file="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495255 4687 flags.go:64] FLAG: --cloud-config="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495264 4687 flags.go:64] FLAG: --cloud-provider="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495273 4687 flags.go:64] FLAG: --cluster-dns="[]" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495284 4687 flags.go:64] FLAG: --cluster-domain="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495293 4687 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495303 4687 flags.go:64] FLAG: --config-dir="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495312 4687 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495321 4687 flags.go:64] FLAG: --container-log-max-files="5" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495332 4687 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495341 4687 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495350 4687 
flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495387 4687 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495397 4687 flags.go:64] FLAG: --contention-profiling="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495406 4687 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495415 4687 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495424 4687 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495433 4687 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495443 4687 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495453 4687 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495462 4687 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495471 4687 flags.go:64] FLAG: --enable-load-reader="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495480 4687 flags.go:64] FLAG: --enable-server="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495513 4687 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495524 4687 flags.go:64] FLAG: --event-burst="100" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495533 4687 flags.go:64] FLAG: --event-qps="50" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495545 4687 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495556 4687 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495567 4687 flags.go:64] FLAG: --eviction-hard="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495578 4687 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495589 4687 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495598 4687 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495608 4687 flags.go:64] FLAG: --eviction-soft="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495617 4687 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495627 4687 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495636 4687 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495645 4687 flags.go:64] FLAG: --experimental-mounter-path="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495654 4687 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495663 4687 flags.go:64] FLAG: --fail-swap-on="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495672 4687 flags.go:64] FLAG: --feature-gates="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495682 4687 flags.go:64] FLAG: --file-check-frequency="20s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495692 4687 flags.go:64] 
FLAG: --global-housekeeping-interval="1m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495701 4687 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495710 4687 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495719 4687 flags.go:64] FLAG: --healthz-port="10248" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495728 4687 flags.go:64] FLAG: --help="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495737 4687 flags.go:64] FLAG: --hostname-override="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495746 4687 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495755 4687 flags.go:64] FLAG: --http-check-frequency="20s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495764 4687 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495773 4687 flags.go:64] FLAG: --image-credential-provider-config="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495781 4687 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495791 4687 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495799 4687 flags.go:64] FLAG: --image-service-endpoint="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495808 4687 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495818 4687 flags.go:64] FLAG: --kube-api-burst="100" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495826 4687 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495836 4687 flags.go:64] FLAG: --kube-api-qps="50" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495845 4687 flags.go:64] FLAG: --kube-reserved="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495855 4687 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495863 4687 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495872 4687 flags.go:64] FLAG: --kubelet-cgroups="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495881 4687 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495890 4687 flags.go:64] FLAG: --lock-file="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495899 4687 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495908 4687 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495917 4687 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495931 4687 flags.go:64] FLAG: --log-json-split-stream="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495940 4687 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495948 4687 flags.go:64] FLAG: --log-text-split-stream="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495959 4687 flags.go:64] FLAG: --logging-format="text" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495969 4687 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495978 4687 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495987 4687 flags.go:64] FLAG: --manifest-url="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.495996 4687 flags.go:64] FLAG: --manifest-url-header="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496007 4687 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496016 4687 flags.go:64] FLAG: --max-open-files="1000000" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496027 4687 flags.go:64] FLAG: --max-pods="110" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496037 4687 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496046 4687 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496054 4687 flags.go:64] FLAG: --memory-manager-policy="None" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496063 4687 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496072 4687 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496081 4687 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496090 4687 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496109 4687 flags.go:64] FLAG: --node-status-max-images="50" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496118 4687 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496128 4687 flags.go:64] FLAG: --oom-score-adj="-999" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496138 4687 flags.go:64] FLAG: --pod-cidr="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496147 4687 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496159 4687 flags.go:64] FLAG: --pod-manifest-path="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496167 4687 flags.go:64] FLAG: --pod-max-pids="-1" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496176 4687 flags.go:64] FLAG: --pods-per-core="0" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496186 4687 flags.go:64] FLAG: --port="10250" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496194 4687 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496203 4687 flags.go:64] FLAG: --provider-id="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496212 4687 flags.go:64] FLAG: --qos-reserved="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496221 4687 flags.go:64] FLAG: --read-only-port="10255" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496230 4687 flags.go:64] FLAG: --register-node="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496239 4687 flags.go:64] FLAG: --register-schedulable="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496247 4687 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496268 4687 flags.go:64] FLAG: --registry-burst="10" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496277 4687 flags.go:64] FLAG: --registry-qps="5" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496287 4687 flags.go:64] FLAG: --reserved-cpus="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496295 4687 flags.go:64] FLAG: --reserved-memory="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496306 4687 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496315 4687 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496324 4687 flags.go:64] FLAG: --rotate-certificates="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496333 4687 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496342 4687 flags.go:64] FLAG: --runonce="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496351 4687 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496388 4687 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496398 4687 flags.go:64] FLAG: --seccomp-default="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496406 4687 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496416 4687 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496425 4687 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496434 4687 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496442 4687 flags.go:64] FLAG: --storage-driver-password="root" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496451 4687 flags.go:64] FLAG: --storage-driver-secure="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496461 4687 flags.go:64] FLAG: --storage-driver-table="stats" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496469 4687 flags.go:64] FLAG: --storage-driver-user="root" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496478 4687 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496486 4687 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496495 4687 flags.go:64] FLAG: --system-cgroups="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496504 4687 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496526 4687 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496534 4687 flags.go:64] FLAG: --tls-cert-file="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496543 4687 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496554 4687 flags.go:64] FLAG: --tls-min-version="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496568 4687 flags.go:64] FLAG: --tls-private-key-file="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496579 4687 flags.go:64] FLAG: 
--topology-manager-policy="none" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496590 4687 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496646 4687 flags.go:64] FLAG: --topology-manager-scope="container" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496657 4687 flags.go:64] FLAG: --v="2" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496670 4687 flags.go:64] FLAG: --version="false" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496681 4687 flags.go:64] FLAG: --vmodule="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496692 4687 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.496701 4687 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.497122 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.497139 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.497149 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.497159 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499053 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499560 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499581 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499594 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499937 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499956 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499967 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499976 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499985 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.499994 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500004 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500014 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500023 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500034 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500043 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500052 4687 
feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500061 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500071 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500080 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500089 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500098 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500107 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500115 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500125 4687 feature_gate.go:330] unrecognized feature gate: Example Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500134 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500144 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500153 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500162 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500171 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500180 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500189 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500201 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
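The long run of flags.go:64] FLAG: --name="value" entries a few lines back is the kubelet echoing every command-line flag it parsed (verbosity --v=2 is itself one of them). Under the same saved-journal assumption as above, a minimal sketch that folds that dump into a dictionary so individual values can be looked up without scrolling; only the literal log format is relied on.

import re
import sys

# 'flags.go:64] FLAG: --node-ip="192.168.126.11"'  ->  ("node-ip", "192.168.126.11")
FLAG_LINE = re.compile(r'flags\.go:\d+\] FLAG: --([A-Za-z0-9-]+)="((?:[^"\\]|\\.)*)"')

def kubelet_flags(journal_text: str) -> dict:
    """Collect the FLAG dump into {flag-name: value}; later occurrences win."""
    return dict(FLAG_LINE.findall(journal_text))

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        flags = kubelet_flags(f.read())
    # A few values worth checking on this node, per the entries above.
    for key in ("config", "kubeconfig", "node-ip",
                "container-runtime-endpoint", "system-reserved"):
        print(f"--{key} = {flags.get(key, '<not set on the command line>')}")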
Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500215 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500226 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500235 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500246 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500255 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500264 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500273 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500282 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500291 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500301 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500310 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500319 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500328 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500337 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500347 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500394 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500407 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500418 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500430 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500442 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500452 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500462 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500473 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500485 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500496 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500506 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500514 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500524 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500533 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500541 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500550 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500559 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500568 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500577 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.500586 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.500603 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.512295 4687 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.512349 4687 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512562 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512585 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
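Most of the volume in this startup is the repeated feature_gate.go:330] "unrecognized feature gate" warning: the gate set is parsed more than once, so the same names are reported again on each pass, and each pass ends with a feature_gate.go:386] "feature gates: {map[...]}" summary listing the gates the kubelet binary actually recognizes (the line just above, with ValidatingAdmissionPolicy=true, KMSv1=true and so on). A minimal sketch, again run against a saved copy of the journal, that deduplicates the unknown gate names and decodes the last summary map; it parses only the literal strings visible in these entries.

import re
import sys
from collections import Counter

UNRECOGNIZED = re.compile(r"unrecognized feature gate: ([A-Za-z0-9]+)")
EFFECTIVE = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")

def summarize_gates(journal_text: str):
    """Count distinct unrecognized gate names and decode the last effective-gates map."""
    unknown = Counter(UNRECOGNIZED.findall(journal_text))
    effective = {}
    for blob in EFFECTIVE.findall(journal_text):
        # e.g. "CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true ..."
        effective = {name: value == "true"
                     for name, value in (pair.split(":", 1) for pair in blob.split())}
    return unknown, effective

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        unknown, effective = summarize_gates(f.read())
    print(f"{len(unknown)} distinct unrecognized gate names, "
          f"{sum(unknown.values())} warnings in total")
    for gate, enabled in sorted(effective.items()):
        print(f"  {gate} = {enabled}")

The warnings flag gate names the kubelet binary does not know and therefore ignores; only the entries that end up in the map[...] summary affect kubelet behaviour on this node.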
Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512604 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512617 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512629 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512642 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512653 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512666 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512678 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512689 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512700 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512711 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512722 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512732 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512747 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512761 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512774 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512787 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512798 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512809 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512820 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512831 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512843 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512858 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512873 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512884 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512895 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512907 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512920 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512931 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512944 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512957 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512972 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512986 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.512998 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513009 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513020 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513031 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513042 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513053 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513064 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513075 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513086 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513097 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513107 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513118 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513131 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513141 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513153 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513165 4687 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513175 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513187 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513197 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513208 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513219 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513230 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513241 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513253 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513263 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513274 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513285 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513296 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513307 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513318 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513329 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513340 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513353 4687 feature_gate.go:330] unrecognized feature gate: Example Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513403 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513414 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513426 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513438 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.513456 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 
16:02:31.513804 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513824 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513837 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513849 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513860 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513870 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513882 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513895 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513906 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513917 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513928 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513939 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513950 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513962 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513973 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513984 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.513994 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514004 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514016 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514027 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514037 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514048 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514058 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514069 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514080 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514091 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 
16:02:31.514102 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514112 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514125 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514136 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514147 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514158 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514168 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514179 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514195 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514211 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514225 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514238 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514252 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514267 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514282 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514296 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514310 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514321 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514333 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514346 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514393 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514407 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514419 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514431 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514442 4687 feature_gate.go:330] unrecognized feature gate: Example Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514453 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514465 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514476 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514487 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514498 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514510 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514521 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514533 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514544 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514555 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514566 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514577 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514588 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514599 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514610 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514622 
4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514633 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514644 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514655 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.514665 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.514682 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.515062 4687 server.go:940] "Client rotation is on, will bootstrap in background" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.521444 4687 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.526284 4687 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.526500 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
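The bootstrap error just above reports that the client certificate embedded in /var/lib/kubelet/kubeconfig expired on 2026-02-24, after which the kubelet falls back to the bootstrap credentials and loads the rotating pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A minimal sketch for checking the expiry of that PEM locally; it assumes the third-party cryptography package is available wherever the file is inspected, and the default path is taken from the log entry above.

import sys
from datetime import datetime

# Third-party dependency (assumption): pip install cryptography
from cryptography import x509

def cert_expiries(path: str):
    """Yield (subject, not-valid-after) for each CERTIFICATE block in a PEM file."""
    with open(path, "rb") as f:
        pem = f.read()
    begin, end = b"-----BEGIN CERTIFICATE-----", b"-----END CERTIFICATE-----"
    start = 0
    while (i := pem.find(begin, start)) != -1:
        j = pem.find(end, i) + len(end)
        cert = x509.load_pem_x509_certificate(pem[i:j])
        # Newer cryptography releases also offer not_valid_after_utc; the naive
        # variant is used here for wider compatibility.
        yield cert.subject.rfc4514_string(), cert.not_valid_after
        start = j

if __name__ == "__main__":
    # Default path taken from the "Loading cert/key pair" entry above.
    path = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/kubelet/pki/kubelet-client-current.pem"
    for subject, expires in cert_expiries(path):
        days_left = (expires - datetime.utcnow()).days
        print(f"{subject}: expires {expires:%Y-%m-%d %H:%M} UTC ({days_left} days left)")

The same check applies to the certificate carried inside /var/lib/kubelet/kubeconfig once its base64-encoded client-certificate-data field is decoded to PEM.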
Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.528768 4687 server.go:997] "Starting client certificate rotation" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.528828 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.529109 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.556632 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.559928 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.561322 4687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.575058 4687 log.go:25] "Validated CRI v1 runtime API" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.616270 4687 log.go:25] "Validated CRI v1 image API" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.618957 4687 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.623342 4687 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-12-15-53-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.623438 4687 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.646223 4687 manager.go:217] Machine: {Timestamp:2026-03-12 16:02:31.643217312 +0000 UTC m=+0.607179696 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4881b7f2-3483-4763-9575-0355a3ee692e BootID:8c6bece5-19ef-4b03-9eea-64676974d44f Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d5:67:d8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d5:67:d8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:93:5b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:35:0c:f4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:84:c7:08 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a4:54:55 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ba:c3:76:e2:cb:cd Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:bb:9c:eb:d3:90 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} 
{Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.646610 4687 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.646931 4687 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.647396 4687 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.647594 4687 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.647637 4687 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.647894 4687 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.647908 4687 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.648479 4687 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 16:02:31 crc kubenswrapper[4687]: 
I0312 16:02:31.648512 4687 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.648733 4687 state_mem.go:36] "Initialized new in-memory state store" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.648843 4687 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.652829 4687 kubelet.go:418] "Attempting to sync node with API server" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.652859 4687 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.652888 4687 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.652907 4687 kubelet.go:324] "Adding apiserver pod source" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.652923 4687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.657509 4687 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.658507 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.659032 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.659058 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.659134 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.659201 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.661063 4687 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663403 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663440 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663453 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 16:02:31 crc 
kubenswrapper[4687]: I0312 16:02:31.663461 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663476 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663485 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663494 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663509 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663519 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663529 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663542 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.663554 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.664801 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.665311 4687 server.go:1280] "Started kubelet" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.665512 4687 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.665852 4687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.666117 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.666622 4687 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 16:02:31 crc systemd[1]: Started Kubernetes Kubelet. 
Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.669418 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.669815 4687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.670019 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.670076 4687 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.670085 4687 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.670191 4687 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.678990 4687 factory.go:55] Registering systemd factory Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.679065 4687 factory.go:221] Registration of the systemd container factory successfully Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.679122 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.678982 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.679282 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.679546 4687 server.go:460] "Adding debug handlers to kubelet server" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.680638 4687 factory.go:153] Registering CRI-O factory Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.680661 4687 factory.go:221] Registration of the crio container factory successfully Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.679828 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c23780512845e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC m=+0.629242448,LastTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC m=+0.629242448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.681083 4687 factory.go:219] Registration of the containerd container factory failed: unable 
to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.681233 4687 factory.go:103] Registering Raw factory Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.681268 4687 manager.go:1196] Started watching for new ooms in manager Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.683736 4687 manager.go:319] Starting recovery of all containers Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688687 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688745 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688767 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688786 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688810 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688831 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688850 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688875 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688904 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688931 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688959 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.688985 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689011 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689040 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689071 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689099 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689123 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689150 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689175 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689201 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689228 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689253 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689279 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689304 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689403 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689435 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689469 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689499 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689525 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689553 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689583 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689609 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689636 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689662 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689688 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689714 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689741 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689767 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689793 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689819 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689845 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689871 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689897 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689922 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689948 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.689973 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.690030 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.690057 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.690084 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.690112 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.690136 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.690168 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.693891 4687 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.693967 4687 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.693999 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694058 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694081 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694102 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694123 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694144 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694163 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694182 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694203 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694223 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694243 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694261 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694280 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694302 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694322 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694399 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694429 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694454 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694472 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694490 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694508 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694528 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694548 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694567 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694590 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694610 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694631 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694649 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694668 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694689 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694709 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694728 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694746 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694767 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694786 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694806 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694825 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694845 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694863 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694882 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694904 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694923 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694943 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694963 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.694983 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695003 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695022 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695040 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695059 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695077 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695095 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695125 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695148 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695169 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695192 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695214 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695238 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695261 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695300 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695323 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695346 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695426 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695450 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695470 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695489 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695509 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695528 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695546 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695564 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695583 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695602 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695622 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695643 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695661 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695681 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695700 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695719 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695738 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695757 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695776 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695796 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695816 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695834 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695854 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695873 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695891 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695910 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695929 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695948 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695969 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.695987 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696006 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696026 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696046 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696064 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696083 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696103 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696123 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696142 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696161 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696179 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696198 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696216 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696235 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696259 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696278 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696297 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696315 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696334 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696355 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696416 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696435 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696454 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696473 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696491 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696510 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696556 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696585 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696604 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696622 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696650 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696670 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696688 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696707 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696731 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696752 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696773 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696793 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696814 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696832 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696851 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696873 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696893 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696914 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696933 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696953 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696973 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.696992 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697013 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697032 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697051 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697071 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697091 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697109 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697128 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697148 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697170 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697190 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697210 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697234 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697254 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697273 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697423 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697449 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697468 4687 reconstruct.go:97] "Volume reconstruction finished" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.697482 4687 reconciler.go:26] "Reconciler: start to sync state" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.703699 4687 manager.go:324] Recovery completed Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.714448 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.715742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.715797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.715807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.716769 4687 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.716784 4687 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.716803 4687 state_mem.go:36] "Initialized new in-memory state store" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.729195 4687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.730549 4687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.731430 4687 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.731564 4687 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.731712 4687 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 16:02:31 crc kubenswrapper[4687]: W0312 16:02:31.733223 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.733323 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.739483 4687 policy_none.go:49] "None policy: Start" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.740950 4687 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.740992 4687 state_mem.go:35] "Initializing new in-memory state store" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.771085 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 
16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.783217 4687 manager.go:334] "Starting Device Plugin manager" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.783461 4687 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.783479 4687 server.go:79] "Starting device plugin registration server" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.783845 4687 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.783861 4687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.784013 4687 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.784077 4687 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.784083 4687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.793902 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.832659 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.832755 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.833663 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.833700 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.833709 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.833830 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.833977 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.834011 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.834796 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.834827 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.834836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.834922 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835131 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835175 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835391 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835421 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835650 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.835748 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.836007 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.836101 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.837457 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.837486 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.837495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.837639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.837654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.837664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.837766 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838119 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838142 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838382 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838403 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838520 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838561 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.838836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.839045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.839067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.839075 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.879700 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.884920 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.885848 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.885900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.885917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.885947 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:31 crc kubenswrapper[4687]: E0312 16:02:31.886422 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899141 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899197 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899234 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899321 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899389 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899420 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899439 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899465 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899510 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899551 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899583 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899623 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899653 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:31 crc kubenswrapper[4687]: I0312 16:02:31.899683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001038 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001112 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001145 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001177 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001209 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001240 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001347 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001343 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001450 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001462 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001483 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001498 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001550 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001511 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001559 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001585 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001493 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001669 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.001737 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.086943 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.088845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.088955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.088977 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.089061 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:32 crc kubenswrapper[4687]: E0312 16:02:32.090022 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.173421 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.179554 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.204318 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.220170 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2e1deb1a2248a0b0be406742c77ac735087b2cbabe65ff242c6260990ec40deb WatchSource:0}: Error finding container 2e1deb1a2248a0b0be406742c77ac735087b2cbabe65ff242c6260990ec40deb: Status 404 returned error can't find the container with id 2e1deb1a2248a0b0be406742c77ac735087b2cbabe65ff242c6260990ec40deb Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.220931 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d5c08eec5018c061dcdfa98256b3500954703e36bce72a650c02a15be5d494b4 WatchSource:0}: Error finding container d5c08eec5018c061dcdfa98256b3500954703e36bce72a650c02a15be5d494b4: Status 404 returned error can't find the container with id d5c08eec5018c061dcdfa98256b3500954703e36bce72a650c02a15be5d494b4 Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.224243 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.227888 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.229019 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e9bdb1899a8d8db6f06e562d3e14fc160ffe0834fc61dfe8080114c438f79c43 WatchSource:0}: Error finding container e9bdb1899a8d8db6f06e562d3e14fc160ffe0834fc61dfe8080114c438f79c43: Status 404 returned error can't find the container with id e9bdb1899a8d8db6f06e562d3e14fc160ffe0834fc61dfe8080114c438f79c43 Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.242561 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-74e97c3a857178a31a41c1d4f6162b5d8ead0d19e61e795438da187251e3f663 WatchSource:0}: Error finding container 74e97c3a857178a31a41c1d4f6162b5d8ead0d19e61e795438da187251e3f663: Status 404 returned error can't find the container with id 74e97c3a857178a31a41c1d4f6162b5d8ead0d19e61e795438da187251e3f663 Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.246528 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ca90d02c3908e7667ec72fdf58793ac189af3e62675fabaa3b1b3da1a17b8d5a WatchSource:0}: Error finding container ca90d02c3908e7667ec72fdf58793ac189af3e62675fabaa3b1b3da1a17b8d5a: Status 404 returned error can't find the container with id ca90d02c3908e7667ec72fdf58793ac189af3e62675fabaa3b1b3da1a17b8d5a Mar 12 16:02:32 crc kubenswrapper[4687]: E0312 16:02:32.281011 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.490938 4687 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.492248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.492313 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.492334 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.492432 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:32 crc kubenswrapper[4687]: E0312 16:02:32.493144 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.631734 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:32 crc kubenswrapper[4687]: E0312 16:02:32.631890 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.667187 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.737039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"74e97c3a857178a31a41c1d4f6162b5d8ead0d19e61e795438da187251e3f663"} Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.738419 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e9bdb1899a8d8db6f06e562d3e14fc160ffe0834fc61dfe8080114c438f79c43"} Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.739757 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2e1deb1a2248a0b0be406742c77ac735087b2cbabe65ff242c6260990ec40deb"} Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.740992 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d5c08eec5018c061dcdfa98256b3500954703e36bce72a650c02a15be5d494b4"} Mar 12 16:02:32 crc kubenswrapper[4687]: I0312 16:02:32.742189 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ca90d02c3908e7667ec72fdf58793ac189af3e62675fabaa3b1b3da1a17b8d5a"} Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.803484 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:32 crc kubenswrapper[4687]: E0312 16:02:32.803577 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:32 crc kubenswrapper[4687]: W0312 16:02:32.912546 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:32 crc kubenswrapper[4687]: E0312 16:02:32.912694 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:33 crc kubenswrapper[4687]: E0312 16:02:33.082567 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s" Mar 12 16:02:33 crc kubenswrapper[4687]: W0312 16:02:33.290626 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:33 crc kubenswrapper[4687]: E0312 16:02:33.290712 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.294153 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.295546 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.295613 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.295627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.295667 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:33 crc kubenswrapper[4687]: E0312 
16:02:33.296649 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.667713 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.692030 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 16:02:33 crc kubenswrapper[4687]: E0312 16:02:33.693153 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.746507 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0527d0560eb93e46d556c72a3039dfdc3b1ec7c434c48d5d329e32c25c37feb7" exitCode=0 Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.746577 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0527d0560eb93e46d556c72a3039dfdc3b1ec7c434c48d5d329e32c25c37feb7"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.746727 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.747844 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.747904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.747920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.748169 4687 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e" exitCode=0 Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.748215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.748334 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.749117 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.749143 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.749152 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.749410 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="74e10309cc2cd0ad470b6262fb02d2dd95128372b1cdf1d7d4b8dff622824202" exitCode=0 Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.749503 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"74e10309cc2cd0ad470b6262fb02d2dd95128372b1cdf1d7d4b8dff622824202"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.749527 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.750534 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.750576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.750592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.751925 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6" exitCode=0 Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.751994 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.752033 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.753739 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.753782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.753798 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.756001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca686e6d1502bc96386a1f6731e3f4d5dd6079254ce592a54dc077ce96421e7b"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.756032 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11bebdcc89d12bea9860cd607c28ab44e3b4fc685aef0bbec6d3bd98c635d862"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.756047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5f74ac40be4fa37525553188242ff101a91c06e403b90d2103f6d2aec7503c8f"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.756059 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7f437980117f5d03ea8163287190b3dbb86c3c9545a22d9e2e4bfe7abae9300d"} Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.756062 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.757027 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.757052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.757060 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.757671 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.758679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.758702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:33 crc kubenswrapper[4687]: I0312 16:02:33.758710 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:34 crc kubenswrapper[4687]: E0312 16:02:34.001272 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c23780512845e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC m=+0.629242448,LastTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC m=+0.629242448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.667313 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:34 crc kubenswrapper[4687]: E0312 16:02:34.684110 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="3.2s" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.759594 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="0d539005a16a8ec66822e16bccf6e67b120fa0cd404f5e517dcee28c86e3eea1" exitCode=0 Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.759675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0d539005a16a8ec66822e16bccf6e67b120fa0cd404f5e517dcee28c86e3eea1"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.759714 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.760663 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.760686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.760696 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.762079 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.762093 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.762627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.762656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.762666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.764894 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6559f1f5d8657249f3627c3e89604f95e9d1314eb9b2a2d056ed05ff564915ab"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.764925 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"433683eea509a28f410173d26954cb90401d8a87b3dba9ca5aae17005f93b3d1"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.764936 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e1c4361fcbf38a2eed0fcf8d9de9c162cd6c7c7d295f05bd4df892bbc584ad9"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.765075 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.766294 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.766398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.766420 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.768581 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.768602 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2af9d553865a32fef7f1bf8a9860ea5b3b0f1a2bdd37f773a7ee43a92b038abf"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.768627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.768638 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.768647 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.768657 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128"} Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.768714 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.769387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.769429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.769448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.769674 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.769726 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.769743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.896743 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.898201 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.898257 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.898276 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:34 crc kubenswrapper[4687]: I0312 16:02:34.898308 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:34 crc kubenswrapper[4687]: E0312 16:02:34.898844 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Mar 12 16:02:34 crc kubenswrapper[4687]: W0312 16:02:34.924925 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Mar 12 16:02:34 crc kubenswrapper[4687]: E0312 16:02:34.925188 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.573462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.774948 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b5d26881d4fdfb5b10c665c78ed766c050cd0a0aa23be0620b02e270b19d4e07" exitCode=0 Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.774991 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b5d26881d4fdfb5b10c665c78ed766c050cd0a0aa23be0620b02e270b19d4e07"} Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.775053 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.775135 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.775186 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.775206 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.775279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.776684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.776702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.776711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 
16:02:35.776864 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.776895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.776912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.777488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.777510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.777517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.777687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.777711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:35 crc kubenswrapper[4687]: I0312 16:02:35.777726 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785097 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8fab042380c07ada0f39e165495bdf52562e6e82d9dde1c247326f606153a7e3"} Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785152 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"60722aee2d9648b96d771a064a16c5eca9cd0e6ea4d66aaf3880af4a8edcebe1"} Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785158 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"896f6be08ff091db15b75647a68f037f2ca9ba71b64cd9e00d8b740ebe8e877a"} Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785184 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"373f8951ea72f3c906ba0ee959913890914ea207828b7e4ec17b7c0cee119bac"} Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3380dbdf6f6a94f422b93cc280dd66e299e6748370eabb34ca185ce74a15fb2a"} Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785203 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.785389 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786352 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786421 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786851 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786887 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:36 crc kubenswrapper[4687]: I0312 16:02:36.786900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.369754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.370041 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.372395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.372456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.372479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.787804 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.789120 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.789165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:37 crc kubenswrapper[4687]: I0312 16:02:37.789176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.061646 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.099640 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.101488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.101581 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.101603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.101642 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.299650 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.299923 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.301482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.301558 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:38 crc kubenswrapper[4687]: I0312 16:02:38.301584 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:39 crc kubenswrapper[4687]: I0312 16:02:39.182598 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:39 crc kubenswrapper[4687]: I0312 16:02:39.182842 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:39 crc kubenswrapper[4687]: I0312 16:02:39.184336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:39 crc kubenswrapper[4687]: I0312 16:02:39.184433 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:39 crc kubenswrapper[4687]: I0312 16:02:39.184459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:40 crc kubenswrapper[4687]: I0312 16:02:40.369785 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 16:02:40 crc kubenswrapper[4687]: I0312 16:02:40.369912 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 16:02:40 crc kubenswrapper[4687]: I0312 16:02:40.586675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 12 16:02:40 crc kubenswrapper[4687]: I0312 16:02:40.587031 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:40 crc kubenswrapper[4687]: I0312 16:02:40.588873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:40 crc 
kubenswrapper[4687]: I0312 16:02:40.588925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:40 crc kubenswrapper[4687]: I0312 16:02:40.588943 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.122596 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.122898 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.125336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.125429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.125455 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.129769 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.752686 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.752936 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.754562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.754642 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.754662 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:41 crc kubenswrapper[4687]: E0312 16:02:41.794001 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.797277 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.798792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.798845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:41 crc kubenswrapper[4687]: I0312 16:02:41.798863 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:42 crc kubenswrapper[4687]: I0312 16:02:42.111756 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:42 crc kubenswrapper[4687]: I0312 16:02:42.116296 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 
16:02:42 crc kubenswrapper[4687]: I0312 16:02:42.609512 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:02:42 crc kubenswrapper[4687]: I0312 16:02:42.799626 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:42 crc kubenswrapper[4687]: I0312 16:02:42.801013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:42 crc kubenswrapper[4687]: I0312 16:02:42.801041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:42 crc kubenswrapper[4687]: I0312 16:02:42.801050 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:43 crc kubenswrapper[4687]: I0312 16:02:43.801621 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:43 crc kubenswrapper[4687]: I0312 16:02:43.802386 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:43 crc kubenswrapper[4687]: I0312 16:02:43.802407 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:43 crc kubenswrapper[4687]: I0312 16:02:43.802415 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.304574 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43318->192.168.126.11:17697: read: connection reset by peer" start-of-body= Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.304628 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43318->192.168.126.11:17697: read: connection reset by peer" Mar 12 16:02:45 crc kubenswrapper[4687]: W0312 16:02:45.393650 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.394034 4687 trace.go:236] Trace[1970539570]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Mar-2026 16:02:35.392) (total time: 10001ms): Mar 12 16:02:45 crc kubenswrapper[4687]: Trace[1970539570]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (16:02:45.393) Mar 12 16:02:45 crc kubenswrapper[4687]: Trace[1970539570]: [10.001182682s] [10.001182682s] END Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.394061 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 12 16:02:45 crc kubenswrapper[4687]: W0312 16:02:45.473819 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.473919 4687 trace.go:236] Trace[63728354]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Mar-2026 16:02:35.472) (total time: 10001ms): Mar 12 16:02:45 crc kubenswrapper[4687]: Trace[63728354]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:02:45.473) Mar 12 16:02:45 crc kubenswrapper[4687]: Trace[63728354]: [10.001098981s] [10.001098981s] END Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.473940 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 12 16:02:45 crc kubenswrapper[4687]: W0312 16:02:45.508602 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.508730 4687 trace.go:236] Trace[163280801]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Mar-2026 16:02:35.507) (total time: 10001ms): Mar 12 16:02:45 crc kubenswrapper[4687]: Trace[163280801]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:02:45.508) Mar 12 16:02:45 crc kubenswrapper[4687]: Trace[163280801]: [10.001151542s] [10.001151542s] END Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.508767 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.668482 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.809550 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.812075 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2af9d553865a32fef7f1bf8a9860ea5b3b0f1a2bdd37f773a7ee43a92b038abf" exitCode=255 Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.812127 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2af9d553865a32fef7f1bf8a9860ea5b3b0f1a2bdd37f773a7ee43a92b038abf"} Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.812288 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.813634 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.813679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.813689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.814202 4687 scope.go:117] "RemoveContainer" containerID="2af9d553865a32fef7f1bf8a9860ea5b3b0f1a2bdd37f773a7ee43a92b038abf" Mar 12 16:02:45 crc kubenswrapper[4687]: W0312 16:02:45.922851 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:45Z is after 2026-02-23T05:33:13Z Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.922945 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.925183 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.925233 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.927772 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:45Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c23780512845e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC m=+0.629242448,LastTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC 
m=+0.629242448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.932403 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 16:02:45 crc kubenswrapper[4687]: I0312 16:02:45.932459 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.933044 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:45Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.933839 4687 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 16:02:45 crc kubenswrapper[4687]: E0312 16:02:45.935998 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:45Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.669517 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:46Z is after 2026-02-23T05:33:13Z Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.816717 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.817323 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.820868 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" exitCode=255 Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.820926 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8"} Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.820978 4687 scope.go:117] "RemoveContainer" containerID="2af9d553865a32fef7f1bf8a9860ea5b3b0f1a2bdd37f773a7ee43a92b038abf" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.821113 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.822832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.822857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.822867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:46 crc kubenswrapper[4687]: I0312 16:02:46.823312 4687 scope.go:117] "RemoveContainer" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" Mar 12 16:02:46 crc kubenswrapper[4687]: E0312 16:02:46.823497 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:02:47 crc kubenswrapper[4687]: I0312 16:02:47.672473 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:47Z is after 2026-02-23T05:33:13Z Mar 12 16:02:47 crc kubenswrapper[4687]: I0312 16:02:47.825795 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.305638 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.305957 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.308089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.308156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.308179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.309183 4687 scope.go:117] "RemoveContainer" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" Mar 12 16:02:48 crc kubenswrapper[4687]: E0312 16:02:48.309521 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.311684 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.671104 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:48Z is after 2026-02-23T05:33:13Z Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.831341 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.832807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.832866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.832885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:48 crc kubenswrapper[4687]: I0312 16:02:48.833708 4687 scope.go:117] "RemoveContainer" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" Mar 12 16:02:48 crc kubenswrapper[4687]: E0312 16:02:48.833998 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:02:49 crc kubenswrapper[4687]: I0312 16:02:49.671457 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:49Z is after 2026-02-23T05:33:13Z Mar 12 16:02:50 crc kubenswrapper[4687]: W0312 16:02:50.022273 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:50Z is after 2026-02-23T05:33:13Z Mar 12 16:02:50 crc kubenswrapper[4687]: E0312 16:02:50.022408 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 16:02:50 crc 
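An aside on the failure that dominates the entries above: every "tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T...Z is after 2026-02-23T05:33:13Z" message is the same NotBefore/NotAfter window check in Go's crypto/x509 rejecting a chain whose NotAfter (2026-02-23T05:33:13Z) is already in the past. A minimal sketch of that check follows; it is illustrative only, and the certificate path is an assumption about where a kubelet client certificate might live, not something taken from this log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Illustrative path only; point this at whatever PEM certificate you want to inspect.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        now := time.Now().UTC()
        // The same validity-window test that certificate verification applies; when it
        // fails, the error reads "certificate has expired or is not yet valid:
        // current time <now> is after <NotAfter>".
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
                now.Format(time.RFC3339),
                cert.NotBefore.UTC().Format(time.RFC3339),
                cert.NotAfter.UTC().Format(time.RFC3339))
            return
        }
        fmt.Println("valid until", cert.NotAfter.UTC().Format(time.RFC3339))
    }

Running a check like this against the certificates the kubelet presents and trusts is one way to confirm which side of the 2026-02-23T05:33:13Z expiry the failing handshakes are tripping over.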
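A second aside, on the recurring "back-off 10s restarting failed container=kube-apiserver-check-endpoints ..." entries above: these come from the kubelet's per-container crash back-off. As a rough illustration only (the 10-second start, doubling, and 5-minute cap are assumed upstream kubelet defaults, not values stated anywhere in this log), the delay sequence grows like this:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed CrashLoopBackOff progression: start at 10s, double after each
        // failed restart, cap at 5 minutes.
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: back-off %s\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Under those assumptions the waits would run 10s, 20s, 40s, 1m20s, 2m40s, then hold at 5m, which is consistent with the "back-off 10s" wording seen here so early in the container's restart history.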
kubenswrapper[4687]: I0312 16:02:50.370090 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.370183 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.502827 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.503483 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.506545 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.506620 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.506639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.508001 4687 scope.go:117] "RemoveContainer" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" Mar 12 16:02:50 crc kubenswrapper[4687]: E0312 16:02:50.508221 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.621401 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.621745 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.623429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.623461 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.623476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.641345 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.671267 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:50Z is after 2026-02-23T05:33:13Z Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.836494 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.837772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.837841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:50 crc kubenswrapper[4687]: I0312 16:02:50.837857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:51 crc kubenswrapper[4687]: W0312 16:02:51.288704 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:51Z is after 2026-02-23T05:33:13Z Mar 12 16:02:51 crc kubenswrapper[4687]: E0312 16:02:51.288809 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 16:02:51 crc kubenswrapper[4687]: W0312 16:02:51.310440 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:51Z is after 2026-02-23T05:33:13Z Mar 12 16:02:51 crc kubenswrapper[4687]: E0312 16:02:51.310540 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 16:02:51 crc kubenswrapper[4687]: I0312 16:02:51.672395 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:02:51Z is after 2026-02-23T05:33:13Z Mar 12 16:02:51 crc kubenswrapper[4687]: E0312 16:02:51.794118 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:02:52 crc kubenswrapper[4687]: I0312 16:02:52.336663 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:52 crc 
kubenswrapper[4687]: I0312 16:02:52.338239 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:52 crc kubenswrapper[4687]: I0312 16:02:52.338276 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:52 crc kubenswrapper[4687]: I0312 16:02:52.338292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:52 crc kubenswrapper[4687]: I0312 16:02:52.338325 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:52 crc kubenswrapper[4687]: E0312 16:02:52.344295 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 16:02:52 crc kubenswrapper[4687]: E0312 16:02:52.344328 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 16:02:52 crc kubenswrapper[4687]: I0312 16:02:52.675426 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:02:53 crc kubenswrapper[4687]: I0312 16:02:53.674658 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.647298 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.668050 4687 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.673007 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.752940 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.753178 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.754509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.754537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.754548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:54 crc kubenswrapper[4687]: I0312 16:02:54.754982 4687 scope.go:117] "RemoveContainer" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" Mar 12 
16:02:54 crc kubenswrapper[4687]: E0312 16:02:54.755122 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:02:55 crc kubenswrapper[4687]: I0312 16:02:55.673829 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.935341 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c23780512845e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC m=+0.629242448,LastTimestamp:2026-03-12 16:02:31.665280094 +0000 UTC m=+0.629242448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.940236 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.946604 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.951721 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081593ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,LastTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.958270 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c23780c5241ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.7868979 +0000 UTC m=+0.750860244,LastTimestamp:2026-03-12 16:02:31.7868979 +0000 UTC m=+0.750860244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.964860 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c237808151fe8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.833688223 +0000 UTC m=+0.797650567,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.972031 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081572b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.833706224 +0000 UTC m=+0.797668568,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.979227 4687 event.go:359] "Server 
rejected event (will not retry!)" err="events \"crc.189c2378081593ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081593ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,LastTimestamp:2026-03-12 16:02:31.833713864 +0000 UTC m=+0.797676208,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.985479 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c237808151fe8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.834814089 +0000 UTC m=+0.798776433,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.992174 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081572b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.834832639 +0000 UTC m=+0.798794984,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:55 crc kubenswrapper[4687]: E0312 16:02:55.999098 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081593ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081593ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,LastTimestamp:2026-03-12 16:02:31.83484037 +0000 UTC m=+0.798802714,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.003434 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c237808151fe8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.835409193 +0000 UTC m=+0.799371537,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.005172 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081572b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.835426924 +0000 UTC m=+0.799389258,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.010448 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081593ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081593ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,LastTimestamp:2026-03-12 16:02:31.835434694 +0000 UTC m=+0.799397028,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.015288 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c237808151fe8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC 
m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.835659239 +0000 UTC m=+0.799621583,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.019346 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081572b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.835676929 +0000 UTC m=+0.799639273,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.024863 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081593ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081593ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,LastTimestamp:2026-03-12 16:02:31.835685019 +0000 UTC m=+0.799647363,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.034292 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c237808151fe8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.83747985 +0000 UTC m=+0.801442194,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.038352 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081572b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.83749251 +0000 UTC m=+0.801454854,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.042915 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081593ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081593ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,LastTimestamp:2026-03-12 16:02:31.83750004 +0000 UTC m=+0.801462384,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.047172 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c237808151fe8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.837650254 +0000 UTC m=+0.801612598,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.050720 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081572b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.837659574 +0000 UTC m=+0.801621908,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.055123 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081593ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081593ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715812333 +0000 UTC m=+0.679774677,LastTimestamp:2026-03-12 16:02:31.837668994 +0000 UTC m=+0.801631338,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.058803 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c237808151fe8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c237808151fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715782632 +0000 UTC m=+0.679744966,LastTimestamp:2026-03-12 16:02:31.838049532 +0000 UTC m=+0.802011876,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.062869 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c2378081572b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c2378081572b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:31.715803833 +0000 UTC m=+0.679766177,LastTimestamp:2026-03-12 16:02:31.838060883 +0000 UTC m=+0.802023227,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.069010 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c2378267ef12f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.226033967 +0000 UTC m=+1.189996321,LastTimestamp:2026-03-12 16:02:32.226033967 +0000 UTC m=+1.189996321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.074175 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c23782685cb3f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.226483007 +0000 UTC m=+1.190445351,LastTimestamp:2026-03-12 16:02:32.226483007 +0000 UTC m=+1.190445351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.077855 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c237826cf0bf0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.231283696 +0000 UTC m=+1.195246040,LastTimestamp:2026-03-12 16:02:32.231283696 +0000 UTC m=+1.195246040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.081063 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c237827f38f02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.250453762 +0000 UTC m=+1.214416126,LastTimestamp:2026-03-12 16:02:32.250453762 +0000 UTC m=+1.214416126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.086909 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c237827fd45c8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.251090376 +0000 UTC m=+1.215052720,LastTimestamp:2026-03-12 16:02:32.251090376 +0000 UTC m=+1.215052720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.090890 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378490fafbb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.805945275 +0000 UTC m=+1.769907619,LastTimestamp:2026-03-12 16:02:32.805945275 +0000 UTC m=+1.769907619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.098434 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c237849114f18 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.806051608 +0000 UTC m=+1.770013952,LastTimestamp:2026-03-12 16:02:32.806051608 +0000 UTC m=+1.770013952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.102822 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23784911b598 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.806077848 +0000 UTC m=+1.770040192,LastTimestamp:2026-03-12 16:02:32.806077848 +0000 UTC m=+1.770040192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.106304 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23784916912d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.806396205 +0000 UTC m=+1.770358549,LastTimestamp:2026-03-12 16:02:32.806396205 +0000 UTC m=+1.770358549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.110094 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c2378491e2894 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.806893716 +0000 UTC m=+1.770856060,LastTimestamp:2026-03-12 16:02:32.806893716 +0000 UTC m=+1.770856060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.116616 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c237849a5a5ab openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container 
wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.815773099 +0000 UTC m=+1.779735453,LastTimestamp:2026-03-12 16:02:32.815773099 +0000 UTC m=+1.779735453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.123827 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237849c961af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.818114991 +0000 UTC m=+1.782077335,LastTimestamp:2026-03-12 16:02:32.818114991 +0000 UTC m=+1.782077335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.129999 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237849d899c0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.819112384 +0000 UTC m=+1.783074728,LastTimestamp:2026-03-12 16:02:32.819112384 +0000 UTC m=+1.783074728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.138019 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237849d8a348 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.819114824 +0000 UTC m=+1.783077208,LastTimestamp:2026-03-12 16:02:32.819114824 +0000 UTC m=+1.783077208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.145263 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c237849d8a366 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.819114854 +0000 UTC m=+1.783077228,LastTimestamp:2026-03-12 16:02:32.819114854 +0000 UTC m=+1.783077228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.152200 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c23784a8dada7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.830979495 +0000 UTC m=+1.794941839,LastTimestamp:2026-03-12 16:02:32.830979495 +0000 UTC m=+1.794941839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.158856 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23785cb8e6dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.135802077 +0000 UTC m=+2.099764421,LastTimestamp:2026-03-12 16:02:33.135802077 +0000 UTC m=+2.099764421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.169446 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23785d4e9082 openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.14561037 +0000 UTC m=+2.109572714,LastTimestamp:2026-03-12 16:02:33.14561037 +0000 UTC m=+2.109572714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.176051 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23785d5d4f7a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.146576762 +0000 UTC m=+2.110539106,LastTimestamp:2026-03-12 16:02:33.146576762 +0000 UTC m=+2.110539106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.184069 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23786714294a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.309555018 +0000 UTC m=+2.273517362,LastTimestamp:2026-03-12 16:02:33.309555018 +0000 UTC m=+2.273517362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.188924 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237867a9b265 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.319354981 +0000 UTC m=+2.283317335,LastTimestamp:2026-03-12 16:02:33.319354981 +0000 UTC m=+2.283317335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.194638 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237867bac4e3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.320473827 +0000 UTC m=+2.284436171,LastTimestamp:2026-03-12 16:02:33.320473827 +0000 UTC m=+2.284436171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.199909 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23786fa8f3e6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.453523942 +0000 UTC m=+2.417486286,LastTimestamp:2026-03-12 16:02:33.453523942 +0000 UTC m=+2.417486286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.206586 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23787077947f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.467065471 +0000 UTC m=+2.431027835,LastTimestamp:2026-03-12 16:02:33.467065471 +0000 UTC m=+2.431027835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.210679 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c2378815944ce openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.750291662 +0000 UTC m=+2.714254006,LastTimestamp:2026-03-12 16:02:33.750291662 +0000 UTC m=+2.714254006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.215765 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23788167a49d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.751233693 +0000 UTC m=+2.715196047,LastTimestamp:2026-03-12 16:02:33.751233693 +0000 UTC m=+2.715196047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.219574 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c23788179db25 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.752427301 +0000 UTC m=+2.716389655,LastTimestamp:2026-03-12 16:02:33.752427301 +0000 UTC m=+2.716389655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.224309 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237881c6df05 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.757474565 +0000 UTC m=+2.721436919,LastTimestamp:2026-03-12 16:02:33.757474565 +0000 UTC m=+2.721436919,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.229157 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c23788b92e47a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.92184025 +0000 UTC m=+2.885802594,LastTimestamp:2026-03-12 16:02:33.92184025 +0000 UTC m=+2.885802594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.233633 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c23788b9884b6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.92220895 +0000 UTC m=+2.886171294,LastTimestamp:2026-03-12 16:02:33.92220895 +0000 UTC m=+2.886171294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.236868 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c23788c4b9233 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.933943347 +0000 UTC m=+2.897905691,LastTimestamp:2026-03-12 16:02:33.933943347 +0000 UTC m=+2.897905691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.241713 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c23788c5abbca openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.934937034 +0000 UTC m=+2.898899378,LastTimestamp:2026-03-12 16:02:33.934937034 +0000 UTC m=+2.898899378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.247473 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c23788c66003e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.935675454 +0000 UTC m=+2.899637798,LastTimestamp:2026-03-12 16:02:33.935675454 +0000 UTC m=+2.899637798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.251992 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c23788c8b3e0d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.938116109 +0000 UTC m=+2.902078453,LastTimestamp:2026-03-12 16:02:33.938116109 +0000 UTC m=+2.902078453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.257215 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23788d8ea5bf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.955116479 +0000 UTC m=+2.919078823,LastTimestamp:2026-03-12 16:02:33.955116479 +0000 UTC m=+2.919078823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.262581 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c23788da03380 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.95626688 +0000 UTC m=+2.920229224,LastTimestamp:2026-03-12 16:02:33.95626688 +0000 UTC m=+2.920229224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.267390 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c23788dad0f4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.957109583 +0000 UTC m=+2.921071927,LastTimestamp:2026-03-12 16:02:33.957109583 +0000 UTC m=+2.921071927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.272265 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23788f6df2f4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.986527988 +0000 UTC m=+2.950490332,LastTimestamp:2026-03-12 16:02:33.986527988 +0000 UTC m=+2.950490332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.276579 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c2378963c4b52 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.100714322 +0000 UTC m=+3.064676666,LastTimestamp:2026-03-12 16:02:34.100714322 +0000 UTC m=+3.064676666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.280703 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237896ad199b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.108107163 +0000 UTC m=+3.072069507,LastTimestamp:2026-03-12 16:02:34.108107163 +0000 UTC m=+3.072069507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.284903 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c237896d7bd18 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.110901528 +0000 UTC m=+3.074863872,LastTimestamp:2026-03-12 16:02:34.110901528 +0000 UTC m=+3.074863872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.289953 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c237896e5ffef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.111836143 +0000 UTC m=+3.075798487,LastTimestamp:2026-03-12 16:02:34.111836143 +0000 UTC m=+3.075798487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.293538 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237897af4456 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.12502639 +0000 UTC m=+3.088988734,LastTimestamp:2026-03-12 16:02:34.12502639 +0000 UTC m=+3.088988734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.300908 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237897bfa291 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.126099089 +0000 UTC m=+3.090061433,LastTimestamp:2026-03-12 16:02:34.126099089 +0000 UTC m=+3.090061433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.305812 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378a179a2cb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.289283787 +0000 UTC m=+3.253246151,LastTimestamp:2026-03-12 16:02:34.289283787 +0000 UTC m=+3.253246151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.309801 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c2378a18c6379 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.290512761 +0000 UTC m=+3.254475105,LastTimestamp:2026-03-12 16:02:34.290512761 +0000 UTC 
m=+3.254475105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.313293 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c2378a273a35a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.30566793 +0000 UTC m=+3.269630264,LastTimestamp:2026-03-12 16:02:34.30566793 +0000 UTC m=+3.269630264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.316813 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378a2a22fa7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.308718503 +0000 UTC m=+3.272680847,LastTimestamp:2026-03-12 16:02:34.308718503 +0000 UTC m=+3.272680847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.320029 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378a2b27e56 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.309787222 +0000 UTC m=+3.273749566,LastTimestamp:2026-03-12 16:02:34.309787222 +0000 UTC m=+3.273749566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.323123 4687 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378ac4b1856 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.470783062 +0000 UTC m=+3.434745406,LastTimestamp:2026-03-12 16:02:34.470783062 +0000 UTC m=+3.434745406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.326389 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378acf25f9a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.481745818 +0000 UTC m=+3.445708162,LastTimestamp:2026-03-12 16:02:34.481745818 +0000 UTC m=+3.445708162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.329573 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378ad0167fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.482731004 +0000 UTC m=+3.446693348,LastTimestamp:2026-03-12 16:02:34.482731004 +0000 UTC m=+3.446693348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.333001 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378b6e84d61 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.648857953 +0000 UTC m=+3.612820297,LastTimestamp:2026-03-12 16:02:34.648857953 +0000 UTC m=+3.612820297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.337066 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378b77b97e8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.658510824 +0000 UTC m=+3.622473168,LastTimestamp:2026-03-12 16:02:34.658510824 +0000 UTC m=+3.622473168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.341126 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c2378bda4ccea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.761874666 +0000 UTC m=+3.725837010,LastTimestamp:2026-03-12 16:02:34.761874666 +0000 UTC m=+3.725837010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.345587 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c2378cae4d126 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.984173862 +0000 UTC m=+3.948136206,LastTimestamp:2026-03-12 16:02:34.984173862 +0000 UTC m=+3.948136206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.349625 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c2378cb99fd13 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.996047123 +0000 UTC m=+3.960009467,LastTimestamp:2026-03-12 16:02:34.996047123 +0000 UTC m=+3.960009467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.354468 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c2378fa43dfff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:35.778932735 +0000 UTC m=+4.742895079,LastTimestamp:2026-03-12 16:02:35.778932735 +0000 UTC m=+4.742895079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.358273 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c237905f495b4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:35.975062964 +0000 UTC m=+4.939025308,LastTimestamp:2026-03-12 16:02:35.975062964 +0000 UTC m=+4.939025308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.361851 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c237906684280 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:35.98264384 +0000 UTC m=+4.946606204,LastTimestamp:2026-03-12 16:02:35.98264384 +0000 UTC m=+4.946606204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.365063 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23790679c8fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:35.98379238 +0000 UTC m=+4.947754734,LastTimestamp:2026-03-12 16:02:35.98379238 +0000 UTC m=+4.947754734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.368832 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23790f801b7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.135201662 +0000 UTC m=+5.099164026,LastTimestamp:2026-03-12 16:02:36.135201662 +0000 UTC m=+5.099164026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.372279 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23791015a486 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.145001606 +0000 UTC m=+5.108963960,LastTimestamp:2026-03-12 
16:02:36.145001606 +0000 UTC m=+5.108963960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.375814 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23791022f7f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.145874929 +0000 UTC m=+5.109837273,LastTimestamp:2026-03-12 16:02:36.145874929 +0000 UTC m=+5.109837273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.379068 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23791a08e174 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.311937396 +0000 UTC m=+5.275899740,LastTimestamp:2026-03-12 16:02:36.311937396 +0000 UTC m=+5.275899740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.382504 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23791accb97f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.324772223 +0000 UTC m=+5.288734567,LastTimestamp:2026-03-12 16:02:36.324772223 +0000 UTC m=+5.288734567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.385609 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23791ada34f5 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.325655797 +0000 UTC m=+5.289618151,LastTimestamp:2026-03-12 16:02:36.325655797 +0000 UTC m=+5.289618151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.388890 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c237924bc270c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.491458316 +0000 UTC m=+5.455420670,LastTimestamp:2026-03-12 16:02:36.491458316 +0000 UTC m=+5.455420670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.392076 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23792556522b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.501561899 +0000 UTC m=+5.465524243,LastTimestamp:2026-03-12 16:02:36.501561899 +0000 UTC m=+5.465524243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.395543 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c23792566a116 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.502630678 +0000 UTC 
m=+5.466593022,LastTimestamp:2026-03-12 16:02:36.502630678 +0000 UTC m=+5.466593022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.399037 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c237931b3b374 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.709008244 +0000 UTC m=+5.672970598,LastTimestamp:2026-03-12 16:02:36.709008244 +0000 UTC m=+5.672970598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.404064 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c2379329faa0c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:36.724472332 +0000 UTC m=+5.688434686,LastTimestamp:2026-03-12 16:02:36.724472332 +0000 UTC m=+5.688434686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.410340 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 16:02:56 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189c237a0be81616 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 12 16:02:56 crc kubenswrapper[4687]: body: Mar 12 16:02:56 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:40.369874454 +0000 UTC m=+9.333836838,LastTimestamp:2026-03-12 16:02:40.369874454 +0000 UTC m=+9.333836838,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 16:02:56 crc kubenswrapper[4687]: > Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.414013 4687 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237a0be9555a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:40.369956186 +0000 UTC m=+9.333918560,LastTimestamp:2026-03-12 16:02:40.369956186 +0000 UTC m=+9.333918560,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.416117 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 16:02:56 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-apiserver-crc.189c237b320a3c0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:43318->192.168.126.11:17697: read: connection reset by peer Mar 12 16:02:56 crc kubenswrapper[4687]: body: Mar 12 16:02:56 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:45.3046139 +0000 UTC m=+14.268576244,LastTimestamp:2026-03-12 16:02:45.3046139 +0000 UTC m=+14.268576244,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 16:02:56 crc kubenswrapper[4687]: > Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.419858 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237b320acdf3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43318->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:45.304651251 +0000 UTC m=+14.268613595,LastTimestamp:2026-03-12 16:02:45.304651251 +0000 UTC m=+14.268613595,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.423205 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c2378ad0167fc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c2378ad0167fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:34.482731004 +0000 UTC m=+3.446693348,LastTimestamp:2026-03-12 16:02:45.815351519 +0000 UTC m=+14.779313863,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.426602 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 16:02:56 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-apiserver-crc.189c237b5707ecbe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 16:02:56 crc kubenswrapper[4687]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 16:02:56 crc kubenswrapper[4687]: Mar 12 16:02:56 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:45.925219518 +0000 UTC m=+14.889181872,LastTimestamp:2026-03-12 16:02:45.925219518 +0000 UTC m=+14.889181872,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 16:02:56 crc kubenswrapper[4687]: > Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.429768 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237b57087b0d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 
16:02:45.925255949 +0000 UTC m=+14.889218303,LastTimestamp:2026-03-12 16:02:45.925255949 +0000 UTC m=+14.889218303,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.433105 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c237b5707ecbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 16:02:56 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-apiserver-crc.189c237b5707ecbe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 16:02:56 crc kubenswrapper[4687]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 16:02:56 crc kubenswrapper[4687]: Mar 12 16:02:56 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:45.925219518 +0000 UTC m=+14.889181872,LastTimestamp:2026-03-12 16:02:45.932442683 +0000 UTC m=+14.896405047,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 16:02:56 crc kubenswrapper[4687]: > Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.436866 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c237b57087b0d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c237b57087b0d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:45.925255949 +0000 UTC m=+14.889218303,LastTimestamp:2026-03-12 16:02:45.932483384 +0000 UTC m=+14.896445738,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.440988 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 16:02:56 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189c237c5ff838b7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 16:02:56 crc kubenswrapper[4687]: body: Mar 12 16:02:56 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:50.370152631 +0000 UTC m=+19.334114985,LastTimestamp:2026-03-12 16:02:50.370152631 +0000 UTC m=+19.334114985,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 16:02:56 crc kubenswrapper[4687]: > Mar 12 16:02:56 crc kubenswrapper[4687]: E0312 16:02:56.444057 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237c5ff9a678 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:50.370246264 +0000 UTC m=+19.334208618,LastTimestamp:2026-03-12 16:02:50.370246264 +0000 UTC m=+19.334208618,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:02:56 crc kubenswrapper[4687]: I0312 16:02:56.671902 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:02:57 crc kubenswrapper[4687]: I0312 16:02:57.670829 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:02:58 crc kubenswrapper[4687]: W0312 16:02:58.252401 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 16:02:58 crc kubenswrapper[4687]: E0312 16:02:58.252494 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 16:02:58 crc kubenswrapper[4687]: I0312 16:02:58.673823 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:02:59 crc kubenswrapper[4687]: I0312 16:02:59.344729 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:02:59 crc kubenswrapper[4687]: I0312 16:02:59.346520 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:02:59 crc kubenswrapper[4687]: I0312 16:02:59.346591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:02:59 crc kubenswrapper[4687]: I0312 16:02:59.346611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:02:59 crc kubenswrapper[4687]: I0312 16:02:59.346648 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:02:59 crc kubenswrapper[4687]: E0312 16:02:59.351917 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 16:02:59 crc kubenswrapper[4687]: E0312 16:02:59.352159 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 16:02:59 crc kubenswrapper[4687]: I0312 16:02:59.673641 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:00 crc kubenswrapper[4687]: W0312 16:03:00.277943 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 16:03:00 crc kubenswrapper[4687]: E0312 16:03:00.278006 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.370690 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.370799 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.370919 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.371106 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.373039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.373084 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.373102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.373801 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"5f74ac40be4fa37525553188242ff101a91c06e403b90d2103f6d2aec7503c8f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.374054 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://5f74ac40be4fa37525553188242ff101a91c06e403b90d2103f6d2aec7503c8f" gracePeriod=30 Mar 12 16:03:00 crc kubenswrapper[4687]: E0312 16:03:00.379701 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c237c5ff838b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 16:03:00 crc kubenswrapper[4687]: &Event{ObjectMeta:{kube-controller-manager-crc.189c237c5ff838b7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 16:03:00 crc kubenswrapper[4687]: body: Mar 12 16:03:00 crc kubenswrapper[4687]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:50.370152631 +0000 UTC m=+19.334114985,LastTimestamp:2026-03-12 16:03:00.37077252 +0000 UTC m=+29.334734904,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 16:03:00 crc kubenswrapper[4687]: > Mar 12 16:03:00 crc kubenswrapper[4687]: E0312 16:03:00.383827 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c237c5ff9a678\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237c5ff9a678 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:50.370246264 +0000 UTC m=+19.334208618,LastTimestamp:2026-03-12 16:03:00.370880153 +0000 UTC m=+29.334842537,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:03:00 crc kubenswrapper[4687]: E0312 16:03:00.391222 4687 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237eb43f3ba0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:03:00.374027168 +0000 UTC m=+29.337989552,LastTimestamp:2026-03-12 16:03:00.374027168 +0000 UTC m=+29.337989552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:03:00 crc kubenswrapper[4687]: E0312 16:03:00.496893 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c237849d899c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c237849d899c0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:32.819112384 +0000 UTC m=+1.783074728,LastTimestamp:2026-03-12 16:03:00.492238372 +0000 UTC m=+29.456200726,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.671903 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:00 crc kubenswrapper[4687]: E0312 16:03:00.694226 4687 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.189c23785cb8e6dd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23785cb8e6dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.135802077 +0000 UTC m=+2.099764421,LastTimestamp:2026-03-12 16:03:00.689768289 +0000 UTC m=+29.653730633,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.869528 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.869900 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5f74ac40be4fa37525553188242ff101a91c06e403b90d2103f6d2aec7503c8f" exitCode=255 Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.869950 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5f74ac40be4fa37525553188242ff101a91c06e403b90d2103f6d2aec7503c8f"} Mar 12 16:03:00 crc kubenswrapper[4687]: I0312 16:03:00.869998 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"222e881c2080a65bf1607759278a44a02f7a0c195b57ce27b88c925d5d8186e7"} Mar 12 16:03:01 crc kubenswrapper[4687]: E0312 16:03:01.388679 4687 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c23785d4e9082\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c23785d4e9082 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:02:33.14561037 +0000 UTC m=+2.109572714,LastTimestamp:2026-03-12 16:03:01.382773372 +0000 UTC m=+30.346735766,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:03:01 crc kubenswrapper[4687]: I0312 16:03:01.674556 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:01 crc kubenswrapper[4687]: E0312 16:03:01.794431 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:03:01 crc kubenswrapper[4687]: I0312 16:03:01.872398 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:01 crc kubenswrapper[4687]: I0312 16:03:01.873565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:01 crc kubenswrapper[4687]: I0312 16:03:01.873604 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:01 crc kubenswrapper[4687]: I0312 16:03:01.873614 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:02 crc kubenswrapper[4687]: W0312 16:03:02.251353 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:02 crc kubenswrapper[4687]: E0312 16:03:02.251452 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 16:03:02 crc kubenswrapper[4687]: I0312 16:03:02.609510 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:03:02 crc kubenswrapper[4687]: I0312 16:03:02.672218 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:02 crc kubenswrapper[4687]: I0312 16:03:02.874942 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:02 crc kubenswrapper[4687]: I0312 16:03:02.876160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:02 crc kubenswrapper[4687]: I0312 16:03:02.876229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:02 crc kubenswrapper[4687]: I0312 16:03:02.876266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:02 crc kubenswrapper[4687]: W0312 16:03:02.948496 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 12 16:03:02 crc kubenswrapper[4687]: E0312 16:03:02.948580 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 16:03:03 crc kubenswrapper[4687]: I0312 16:03:03.673334 4687 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:04 crc kubenswrapper[4687]: I0312 16:03:04.671475 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:05 crc kubenswrapper[4687]: I0312 16:03:05.673716 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:06 crc kubenswrapper[4687]: I0312 16:03:06.353038 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:06 crc kubenswrapper[4687]: I0312 16:03:06.354990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:06 crc kubenswrapper[4687]: I0312 16:03:06.355031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:06 crc kubenswrapper[4687]: I0312 16:03:06.355040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:06 crc kubenswrapper[4687]: I0312 16:03:06.355064 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:03:06 crc kubenswrapper[4687]: E0312 16:03:06.359381 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 16:03:06 crc kubenswrapper[4687]: E0312 16:03:06.366453 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 16:03:06 crc kubenswrapper[4687]: I0312 16:03:06.672995 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.370573 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.370741 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.371822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.371858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.371871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.385143 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.670866 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.888726 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.889441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.889480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:07 crc kubenswrapper[4687]: I0312 16:03:07.889495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:08 crc kubenswrapper[4687]: I0312 16:03:08.671565 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:09 crc kubenswrapper[4687]: I0312 16:03:09.680664 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:09 crc kubenswrapper[4687]: I0312 16:03:09.732769 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:09 crc kubenswrapper[4687]: I0312 16:03:09.734180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:09 crc kubenswrapper[4687]: I0312 16:03:09.734260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:09 crc kubenswrapper[4687]: I0312 16:03:09.734279 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:09 crc kubenswrapper[4687]: I0312 16:03:09.735200 4687 scope.go:117] "RemoveContainer" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.670461 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.898899 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.899348 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.900934 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="664371ab6bfef44623dc53d493d27877c986286d0e55547a8cea9b4c3201d046" exitCode=255 Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.900988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"664371ab6bfef44623dc53d493d27877c986286d0e55547a8cea9b4c3201d046"} Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.901022 4687 scope.go:117] "RemoveContainer" containerID="6f8cd67d244d1d5a0ef9292fe7dd83496edc54bd7e29863a34ac5286b7b3d4c8" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.901149 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.902028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.902061 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.902073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:10 crc kubenswrapper[4687]: I0312 16:03:10.902573 4687 scope.go:117] "RemoveContainer" containerID="664371ab6bfef44623dc53d493d27877c986286d0e55547a8cea9b4c3201d046" Mar 12 16:03:10 crc kubenswrapper[4687]: E0312 16:03:10.902771 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:11 crc kubenswrapper[4687]: I0312 16:03:11.673411 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:11 crc kubenswrapper[4687]: E0312 16:03:11.794580 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:03:11 crc kubenswrapper[4687]: I0312 16:03:11.905862 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 16:03:12 crc kubenswrapper[4687]: I0312 16:03:12.615634 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:03:12 crc kubenswrapper[4687]: I0312 16:03:12.615797 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:12 crc kubenswrapper[4687]: I0312 16:03:12.617026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:12 crc kubenswrapper[4687]: I0312 16:03:12.617101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:12 crc kubenswrapper[4687]: I0312 16:03:12.617123 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:12 
crc kubenswrapper[4687]: I0312 16:03:12.672873 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:13 crc kubenswrapper[4687]: I0312 16:03:13.359933 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:13 crc kubenswrapper[4687]: I0312 16:03:13.361414 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:13 crc kubenswrapper[4687]: I0312 16:03:13.361463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:13 crc kubenswrapper[4687]: I0312 16:03:13.361482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:13 crc kubenswrapper[4687]: I0312 16:03:13.361516 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:03:13 crc kubenswrapper[4687]: E0312 16:03:13.366589 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 16:03:13 crc kubenswrapper[4687]: E0312 16:03:13.370939 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 16:03:13 crc kubenswrapper[4687]: W0312 16:03:13.644632 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 16:03:13 crc kubenswrapper[4687]: E0312 16:03:13.644721 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 16:03:13 crc kubenswrapper[4687]: I0312 16:03:13.673862 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:14 crc kubenswrapper[4687]: I0312 16:03:14.674189 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:14 crc kubenswrapper[4687]: I0312 16:03:14.752498 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:03:14 crc kubenswrapper[4687]: I0312 16:03:14.752766 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:14 crc kubenswrapper[4687]: I0312 16:03:14.754223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:14 crc kubenswrapper[4687]: I0312 16:03:14.754289 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:14 crc kubenswrapper[4687]: I0312 16:03:14.754306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:14 crc kubenswrapper[4687]: I0312 16:03:14.755197 4687 scope.go:117] "RemoveContainer" containerID="664371ab6bfef44623dc53d493d27877c986286d0e55547a8cea9b4c3201d046" Mar 12 16:03:14 crc kubenswrapper[4687]: E0312 16:03:14.755564 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:15 crc kubenswrapper[4687]: I0312 16:03:15.677682 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:16 crc kubenswrapper[4687]: I0312 16:03:16.669389 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:17 crc kubenswrapper[4687]: I0312 16:03:17.673537 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:18 crc kubenswrapper[4687]: I0312 16:03:18.674745 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:19 crc kubenswrapper[4687]: I0312 16:03:19.674905 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.367752 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.369593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.369667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.369682 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.369717 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:03:20 crc kubenswrapper[4687]: E0312 16:03:20.376986 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 16:03:20 crc kubenswrapper[4687]: E0312 
16:03:20.377016 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.502745 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.502981 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.504435 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.504485 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.504496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.505040 4687 scope.go:117] "RemoveContainer" containerID="664371ab6bfef44623dc53d493d27877c986286d0e55547a8cea9b4c3201d046" Mar 12 16:03:20 crc kubenswrapper[4687]: E0312 16:03:20.505209 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:20 crc kubenswrapper[4687]: I0312 16:03:20.673660 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:21 crc kubenswrapper[4687]: W0312 16:03:21.032742 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 16:03:21 crc kubenswrapper[4687]: E0312 16:03:21.033679 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 16:03:21 crc kubenswrapper[4687]: I0312 16:03:21.671008 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:21 crc kubenswrapper[4687]: E0312 16:03:21.794900 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:03:22 crc kubenswrapper[4687]: I0312 16:03:22.673059 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:23 crc kubenswrapper[4687]: I0312 16:03:23.673514 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:24 crc kubenswrapper[4687]: I0312 16:03:24.673640 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:24 crc kubenswrapper[4687]: W0312 16:03:24.933311 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 12 16:03:24 crc kubenswrapper[4687]: E0312 16:03:24.933390 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 16:03:25 crc kubenswrapper[4687]: W0312 16:03:25.561692 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:25 crc kubenswrapper[4687]: E0312 16:03:25.561766 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 16:03:25 crc kubenswrapper[4687]: I0312 16:03:25.580641 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 16:03:25 crc kubenswrapper[4687]: I0312 16:03:25.580867 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:25 crc kubenswrapper[4687]: I0312 16:03:25.582176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:25 crc kubenswrapper[4687]: I0312 16:03:25.582225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:25 crc kubenswrapper[4687]: I0312 16:03:25.582242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:25 crc kubenswrapper[4687]: I0312 16:03:25.673647 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:26 crc kubenswrapper[4687]: I0312 16:03:26.674591 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:27 crc kubenswrapper[4687]: I0312 16:03:27.377228 
4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:27 crc kubenswrapper[4687]: I0312 16:03:27.378547 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:27 crc kubenswrapper[4687]: I0312 16:03:27.378589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:27 crc kubenswrapper[4687]: I0312 16:03:27.378602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:27 crc kubenswrapper[4687]: I0312 16:03:27.378630 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:03:27 crc kubenswrapper[4687]: E0312 16:03:27.384244 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 16:03:27 crc kubenswrapper[4687]: E0312 16:03:27.384425 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 16:03:27 crc kubenswrapper[4687]: I0312 16:03:27.673108 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:28 crc kubenswrapper[4687]: I0312 16:03:28.674055 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:29 crc kubenswrapper[4687]: I0312 16:03:29.673765 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:30 crc kubenswrapper[4687]: I0312 16:03:30.671888 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:31 crc kubenswrapper[4687]: I0312 16:03:31.672631 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:31 crc kubenswrapper[4687]: E0312 16:03:31.795614 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:03:32 crc kubenswrapper[4687]: I0312 16:03:32.674092 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:33 crc kubenswrapper[4687]: I0312 16:03:33.669563 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" 
is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.384532 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.386089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.386120 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.386129 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.386152 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:03:34 crc kubenswrapper[4687]: E0312 16:03:34.388421 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 16:03:34 crc kubenswrapper[4687]: E0312 16:03:34.388515 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.671310 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.732309 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.733164 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.733205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.733216 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.733742 4687 scope.go:117] "RemoveContainer" containerID="664371ab6bfef44623dc53d493d27877c986286d0e55547a8cea9b4c3201d046" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.972117 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.974087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183"} Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.974221 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.975945 4687 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.975970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:34 crc kubenswrapper[4687]: I0312 16:03:34.975980 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.670298 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.816717 4687 csr.go:261] certificate signing request csr-vz596 is approved, waiting to be issued Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.823470 4687 csr.go:257] certificate signing request csr-vz596 is issued Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.878600 4687 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.977663 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.978525 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.980146 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" exitCode=255 Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.980187 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183"} Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.980226 4687 scope.go:117] "RemoveContainer" containerID="664371ab6bfef44623dc53d493d27877c986286d0e55547a8cea9b4c3201d046" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.980395 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.981348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.981407 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.981419 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:35 crc kubenswrapper[4687]: I0312 16:03:35.982065 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:03:35 crc kubenswrapper[4687]: E0312 16:03:35.982294 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:36 crc kubenswrapper[4687]: I0312 16:03:36.529285 4687 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 12 16:03:36 crc kubenswrapper[4687]: I0312 16:03:36.825028 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-05 19:12:29.956790506 +0000 UTC Mar 12 16:03:36 crc kubenswrapper[4687]: I0312 16:03:36.825070 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6435h8m53.131723564s for next certificate rotation Mar 12 16:03:36 crc kubenswrapper[4687]: I0312 16:03:36.983024 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 16:03:40 crc kubenswrapper[4687]: I0312 16:03:40.502900 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:03:40 crc kubenswrapper[4687]: I0312 16:03:40.503099 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:40 crc kubenswrapper[4687]: I0312 16:03:40.504483 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:40 crc kubenswrapper[4687]: I0312 16:03:40.504565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:40 crc kubenswrapper[4687]: I0312 16:03:40.504585 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:40 crc kubenswrapper[4687]: I0312 16:03:40.505426 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:03:40 crc kubenswrapper[4687]: E0312 16:03:40.505703 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.389549 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.391001 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.391059 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.391077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.391215 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.399877 4687 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 12 16:03:41 crc 
kubenswrapper[4687]: I0312 16:03:41.400223 4687 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.400255 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.404274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.404345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.404415 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.404445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.404467 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:41Z","lastTransitionTime":"2026-03-12T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.421783 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.436826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.436873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.436884 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.436902 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.436914 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:41Z","lastTransitionTime":"2026-03-12T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.448852 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.455125 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.455168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.455180 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.455200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.455212 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:41Z","lastTransitionTime":"2026-03-12T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.466821 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.476503 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.476568 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.476590 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.476620 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:41 crc kubenswrapper[4687]: I0312 16:03:41.476643 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:41Z","lastTransitionTime":"2026-03-12T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.488125 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.488228 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.488248 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.589400 4687 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.690199 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.791027 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.796324 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.891528 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:41 crc kubenswrapper[4687]: E0312 16:03:41.992562 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.093460 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.193901 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.294060 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.394850 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.495024 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.595387 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.696399 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.796595 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.896836 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:42 crc kubenswrapper[4687]: E0312 16:03:42.998043 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.098621 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.198947 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.299979 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.400964 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.464583 4687 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 16:03:43 crc 
kubenswrapper[4687]: I0312 16:03:43.503066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.503106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.503118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.503136 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.503148 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:43Z","lastTransitionTime":"2026-03-12T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.606525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.606570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.606578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.606593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.606606 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:43Z","lastTransitionTime":"2026-03-12T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.692867 4687 apiserver.go:52] "Watching apiserver" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.701247 4687 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.701701 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-657xn","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-nj99p","openshift-multus/multus-additional-cni-plugins-gnhsw","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-multus/multus-9k44w","openshift-multus/network-metrics-daemon-d4g6l","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-image-registry/node-ca-dt4jw","openshift-machine-config-operator/machine-config-daemon-bxjh2"] Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.702110 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.702455 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.702482 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.702491 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.702852 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.702839 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.702912 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.702930 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.703025 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.703255 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.703278 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.703439 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.703284 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.703728 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.703736 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.704226 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.705456 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.705997 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.708541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.708576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.708594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.708615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.708632 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:43Z","lastTransitionTime":"2026-03-12T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.723118 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.723465 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.724651 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.725079 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.725437 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.725958 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.726300 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.729757 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.729933 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.730888 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.731123 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.731339 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 16:03:43 crc kubenswrapper[4687]: 
I0312 16:03:43.731402 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.731627 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.731710 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.732257 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.732526 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.733880 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.734019 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.735535 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737210 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737237 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737249 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737325 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737376 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737400 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737456 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737485 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737215 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737614 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737722 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737841 4687 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737435 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.737649 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.738314 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.739630 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.741090 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.742302 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.767851 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.771954 4687 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.779225 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.796145 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.806026 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.811219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.811250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.811262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.811278 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.811289 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:43Z","lastTransitionTime":"2026-03-12T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.815677 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.830719 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.830775 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.830803 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 
12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.830831 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.830861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831134 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831254 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831277 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831421 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831468 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831495 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831523 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831582 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831609 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831622 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831635 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831660 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831686 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831710 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831733 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831756 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831762 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831862 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832071 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832096 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832117 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832138 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832166 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832189 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832214 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod 
\"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832236 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832257 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832279 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832323 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832345 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832402 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832425 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832447 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832470 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832514 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832538 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832560 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832595 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832617 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832640 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832663 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832688 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832797 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832820 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832844 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832867 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832917 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832945 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832967 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832990 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 16:03:43 crc 
kubenswrapper[4687]: I0312 16:03:43.833036 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833079 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833040 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833132 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833154 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833175 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833197 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833277 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833303 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833324 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833708 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833736 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833760 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833842 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833867 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833889 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833912 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833936 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833968 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833998 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.834411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835273 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835774 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835804 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835836 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835863 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835886 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835975 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836002 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836025 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836049 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 16:03:43 crc 
kubenswrapper[4687]: I0312 16:03:43.836071 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836095 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836118 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836140 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836162 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836184 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836213 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836240 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836266 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831875 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.831897 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832043 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832241 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832418 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832650 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832704 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.832959 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833125 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833238 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833443 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.833968 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.834022 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.834056 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.834092 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.834520 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.834743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.834829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835050 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836572 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835071 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835393 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835464 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835721 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836613 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835775 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.835785 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836293 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836776 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836807 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836832 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836872 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836881 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837012 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837016 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.836993 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837051 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837079 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837101 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837226 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837285 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837347 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837123 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837753 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837785 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837811 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837864 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837888 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837913 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 16:03:43 crc 
kubenswrapper[4687]: I0312 16:03:43.837937 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837959 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.837982 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838033 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838055 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838077 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838101 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838123 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838150 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838172 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838189 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838212 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838227 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838226 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838243 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838261 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838298 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838334 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838384 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838426 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838442 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838469 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838497 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838521 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838545 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838568 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838592 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838617 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838644 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838666 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838752 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.838808 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839032 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839061 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839079 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839081 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839102 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839127 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839153 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839205 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839255 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839278 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839289 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839303 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839326 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839343 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839388 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839408 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839445 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839462 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839479 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839497 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839516 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839534 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839552 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839573 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839591 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839609 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839630 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839646 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839691 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839714 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839738 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839763 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839840 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839884 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839932 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839959 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839986 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840007 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840074 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840112 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840197 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840231 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840250 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-cni-multus\") pod \"multus-9k44w\" (UID: 
\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-systemd\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840283 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4ghw\" (UniqueName: \"kubernetes.io/projected/a785ed51-b59b-4ec7-b31c-a66279b9151c-kube-api-access-n4ghw\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-system-cni-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840314 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-cni-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-k8s-cni-cncf-io\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840346 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-etc-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840396 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-env-overrides\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840417 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a785ed51-b59b-4ec7-b31c-a66279b9151c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840433 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-socket-dir-parent\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-etc-kubernetes\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840463 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-netns\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840481 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-kubelet\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-var-lib-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840787 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840805 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-systemd-units\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840819 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-netns\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840841 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840859 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cdd\" (UniqueName: \"kubernetes.io/projected/10b18da7-d236-4556-a9ef-7d582b3ed224-kube-api-access-66cdd\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-os-release\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-cni-bin\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840920 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840946 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840971 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ee11e6-3caf-46f7-8321-84633755d718-ovn-node-metrics-cert\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv6jr\" (UniqueName: \"kubernetes.io/projected/c7c33249-8ce6-49e4-a8a1-91bffa192e26-kube-api-access-bv6jr\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841073 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-conf-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841096 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-log-socket\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841120 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-bin\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841148 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc269c97-37cd-4773-ab83-9e47b2666fb4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841177 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfdq\" (UniqueName: \"kubernetes.io/projected/b3ee11e6-3caf-46f7-8321-84633755d718-kube-api-access-kqfdq\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxr8w\" (UniqueName: \"kubernetes.io/projected/349416ea-6932-403a-8e28-2ab01e85402c-kube-api-access-kxr8w\") pod \"node-resolver-657xn\" (UID: \"349416ea-6932-403a-8e28-2ab01e85402c\") " pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841220 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bc5d5523-b52d-4739-9a22-3abb886d7f0d-cni-binary-copy\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/349416ea-6932-403a-8e28-2ab01e85402c-hosts-file\") pod \"node-resolver-657xn\" (UID: \"349416ea-6932-403a-8e28-2ab01e85402c\") " pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841295 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc269c97-37cd-4773-ab83-9e47b2666fb4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841324 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841348 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/10b18da7-d236-4556-a9ef-7d582b3ed224-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841387 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c7c33249-8ce6-49e4-a8a1-91bffa192e26-serviceca\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841411 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drz6l\" (UniqueName: \"kubernetes.io/projected/cc269c97-37cd-4773-ab83-9e47b2666fb4-kube-api-access-drz6l\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841435 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841457 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-netd\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841479 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/10b18da7-d236-4556-a9ef-7d582b3ed224-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-daemon-config\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841524 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5zj\" (UniqueName: \"kubernetes.io/projected/bc5d5523-b52d-4739-9a22-3abb886d7f0d-kube-api-access-ww5zj\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841548 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-kubelet\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-slash\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841671 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841688 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bbzb\" (UniqueName: \"kubernetes.io/projected/bb883751-bda8-4227-99fe-74d0b85cff17-kube-api-access-4bbzb\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc269c97-37cd-4773-ab83-9e47b2666fb4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-cnibin\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 
16:03:43.841755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-script-lib\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841775 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841816 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841836 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7c33249-8ce6-49e4-a8a1-91bffa192e26-host\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841859 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-multus-certs\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841880 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-ovn\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-config\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a785ed51-b59b-4ec7-b31c-a66279b9151c-rootfs\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841947 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841969 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-hostroot\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841989 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a785ed51-b59b-4ec7-b31c-a66279b9151c-proxy-tls\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842032 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-node-log\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842060 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842087 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-system-cni-dir\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842136 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-os-release\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-cnibin\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842271 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842416 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842746 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842786 4687 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842830 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842844 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842859 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842929 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842948 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842985 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843004 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: 
I0312 16:03:43.843021 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843087 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843105 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843119 4687 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843134 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843148 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843161 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843174 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843187 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843202 4687 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843217 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843231 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843245 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath 
\"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843263 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843276 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843291 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843307 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843323 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843337 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843352 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843393 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843410 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843424 4687 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843439 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843453 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843468 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843481 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843496 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843510 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843525 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843538 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843550 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843564 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843578 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843594 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.844074 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.845269 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.846702 4687 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839721 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.839787 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.840806 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841068 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841106 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841120 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841479 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841597 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841670 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.841869 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842007 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842179 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842324 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842384 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842394 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842765 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842878 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.847519 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.848039 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.848949 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.849063 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.849218 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.849309 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.849415 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.850072 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.850309 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.850497 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.850579 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.850866 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.850871 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843228 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843341 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843450 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843480 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843350 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843683 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843759 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843894 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.844057 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.844066 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.843749 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.844209 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:03:44.344186436 +0000 UTC m=+73.308148790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.856141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.856206 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.857511 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.857745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.857795 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.857847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.858020 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.858119 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.858904 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.858939 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.859458 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.859702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.859731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.859763 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.859866 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.859905 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.859945 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.860002 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.860021 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.860143 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.860339 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.860969 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.860994 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.861147 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.861542 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.861549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.844853 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.845121 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.845181 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.845345 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.845477 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.846424 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.861697 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.861960 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.861991 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.862197 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.862585 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.862814 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.862898 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.862913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.863234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.863524 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.863971 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.864281 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.864608 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.865142 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.865176 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.865210 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.865095 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.865702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.866414 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.866578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.866909 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.867184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.844379 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.868140 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.868510 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.842985 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.869446 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.870305 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.870696 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.870733 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.870823 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.870859 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.870246 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.871269 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.871753 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:44.371415402 +0000 UTC m=+73.335377746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.871853 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:44.371842233 +0000 UTC m=+73.335804577 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.872969 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.873168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.873174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.873467 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.873475 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.874081 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.874113 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.874528 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.874742 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.874840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.874517 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.875123 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.875390 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.875617 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.875609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.875578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.875676 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.876116 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.876438 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.876812 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.876832 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.876870 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.876935 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.876948 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.877040 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:44.377017913 +0000 UTC m=+73.340980257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.876904 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.875924 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.877175 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.878829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.879536 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.879774 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.880619 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.881688 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.881745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.882218 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.882380 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.882667 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.882698 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.882711 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.882756 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:44.382741318 +0000 UTC m=+73.346703662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.882924 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.883207 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.883238 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.883277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.884062 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.885297 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.885612 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.886382 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.886637 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.887561 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.892030 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.898130 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.905555 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.908478 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.912431 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.913660 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.914492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.914527 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.914538 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.914557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.914570 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:43Z","lastTransitionTime":"2026-03-12T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.919803 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.927340 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.934681 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-cnibin\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-script-lib\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944069 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-ovn\") pod \"ovnkube-node-nj99p\" (UID: 
\"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-config\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944116 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a785ed51-b59b-4ec7-b31c-a66279b9151c-rootfs\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944152 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7c33249-8ce6-49e4-a8a1-91bffa192e26-host\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944212 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-multus-certs\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944233 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-hostroot\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944254 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a785ed51-b59b-4ec7-b31c-a66279b9151c-proxy-tls\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944279 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-ovn\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944341 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7c33249-8ce6-49e4-a8a1-91bffa192e26-host\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944344 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944387 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a785ed51-b59b-4ec7-b31c-a66279b9151c-rootfs\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944354 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-cnibin\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944403 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-hostroot\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944396 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-multus-certs\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944499 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944297 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944830 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-system-cni-dir\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944854 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-os-release\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-cnibin\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-node-log\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944911 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-script-lib\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-config\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944931 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-node-log\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.944984 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-system-cni-dir\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945017 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-systemd\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945038 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-cni-multus\") pod \"multus-9k44w\" (UID: 
\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-etc-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945049 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-cnibin\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.944994 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-systemd\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945098 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-cni-multus\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945069 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-env-overrides\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945133 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-etc-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: E0312 16:03:43.945155 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs podName:bb883751-bda8-4227-99fe-74d0b85cff17 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:44.445129883 +0000 UTC m=+73.409092247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs") pod "network-metrics-daemon-d4g6l" (UID: "bb883751-bda8-4227-99fe-74d0b85cff17") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a785ed51-b59b-4ec7-b31c-a66279b9151c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945208 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945226 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4ghw\" (UniqueName: \"kubernetes.io/projected/a785ed51-b59b-4ec7-b31c-a66279b9151c-kube-api-access-n4ghw\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945254 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-system-cni-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945271 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-cni-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945288 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-k8s-cni-cncf-io\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-socket-dir-parent\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-etc-kubernetes\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945347 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-netns\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945392 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-kubelet\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945410 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-var-lib-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-system-cni-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-socket-dir-parent\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945461 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-etc-kubernetes\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945495 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-netns\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945507 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-cni-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945516 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-kubelet\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cdd\" (UniqueName: \"kubernetes.io/projected/10b18da7-d236-4556-a9ef-7d582b3ed224-kube-api-access-66cdd\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " 
pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-var-lib-openvswitch\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945542 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-run-k8s-cni-cncf-io\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945545 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-os-release\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/10b18da7-d236-4556-a9ef-7d582b3ed224-os-release\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945586 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-cni-bin\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-systemd-units\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945613 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-os-release\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-netns\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945644 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-host-var-lib-cni-bin\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-systemd-units\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945656 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-conf-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945682 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-conf-dir\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945690 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945670 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-netns\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-log-socket\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945721 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-bin\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945745 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-bin\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-log-socket\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ee11e6-3caf-46f7-8321-84633755d718-ovn-node-metrics-cert\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv6jr\" (UniqueName: \"kubernetes.io/projected/c7c33249-8ce6-49e4-a8a1-91bffa192e26-kube-api-access-bv6jr\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945847 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc269c97-37cd-4773-ab83-9e47b2666fb4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945865 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfdq\" (UniqueName: \"kubernetes.io/projected/b3ee11e6-3caf-46f7-8321-84633755d718-kube-api-access-kqfdq\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxr8w\" (UniqueName: \"kubernetes.io/projected/349416ea-6932-403a-8e28-2ab01e85402c-kube-api-access-kxr8w\") pod \"node-resolver-657xn\" (UID: \"349416ea-6932-403a-8e28-2ab01e85402c\") " pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bc5d5523-b52d-4739-9a22-3abb886d7f0d-cni-binary-copy\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945912 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/349416ea-6932-403a-8e28-2ab01e85402c-hosts-file\") pod \"node-resolver-657xn\" (UID: \"349416ea-6932-403a-8e28-2ab01e85402c\") " pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/10b18da7-d236-4556-a9ef-7d582b3ed224-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945943 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c7c33249-8ce6-49e4-a8a1-91bffa192e26-serviceca\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc269c97-37cd-4773-ab83-9e47b2666fb4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945978 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drz6l\" (UniqueName: \"kubernetes.io/projected/cc269c97-37cd-4773-ab83-9e47b2666fb4-kube-api-access-drz6l\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww5zj\" (UniqueName: \"kubernetes.io/projected/bc5d5523-b52d-4739-9a22-3abb886d7f0d-kube-api-access-ww5zj\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946013 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-kubelet\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946040 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-slash\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946056 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946071 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-netd\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946089 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/10b18da7-d236-4556-a9ef-7d582b3ed224-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-daemon-config\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946120 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bbzb\" (UniqueName: \"kubernetes.io/projected/bb883751-bda8-4227-99fe-74d0b85cff17-kube-api-access-4bbzb\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946137 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc269c97-37cd-4773-ab83-9e47b2666fb4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946221 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946232 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946242 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946251 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946260 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946269 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946278 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946288 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946299 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946308 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946383 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc269c97-37cd-4773-ab83-9e47b2666fb4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946419 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-slash\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946317 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946450 4687 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946461 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946474 4687 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946486 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946496 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946505 4687 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946516 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946525 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946535 4687 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946545 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946554 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946564 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946573 4687 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946583 4687 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946593 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946603 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946611 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948164 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948179 4687 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948189 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948199 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948207 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948217 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948227 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948238 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948250 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948263 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948273 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948281 4687 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948293 4687 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948303 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948313 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" 
(UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948322 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948330 4687 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948338 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948347 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948373 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948381 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948390 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948399 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948409 4687 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948419 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948429 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948437 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948446 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948455 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948463 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948470 4687 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948479 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948488 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948498 4687 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948507 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948518 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948531 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948540 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948550 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948558 4687 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948567 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948576 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948584 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948593 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948601 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.947557 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/349416ea-6932-403a-8e28-2ab01e85402c-hosts-file\") pod \"node-resolver-657xn\" (UID: \"349416ea-6932-403a-8e28-2ab01e85402c\") " pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948084 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/10b18da7-d236-4556-a9ef-7d582b3ed224-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948610 4687 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948114 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-kubelet\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.947545 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/10b18da7-d236-4556-a9ef-7d582b3ed224-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948661 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.947048 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/cc269c97-37cd-4773-ab83-9e47b2666fb4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.946892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c7c33249-8ce6-49e4-a8a1-91bffa192e26-serviceca\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948682 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.945820 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-env-overrides\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948700 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.947594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-netd\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.947487 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a785ed51-b59b-4ec7-b31c-a66279b9151c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948711 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948739 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948755 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948767 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948781 4687 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948793 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948805 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948818 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948839 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948853 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948864 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948877 4687 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948887 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948898 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.948908 4687 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949085 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949102 4687 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949118 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949134 4687 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949146 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949157 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949170 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949181 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949194 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949206 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949218 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949230 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949242 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949254 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949612 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ee11e6-3caf-46f7-8321-84633755d718-ovn-node-metrics-cert\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949831 4687 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949850 4687 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949865 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949916 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949929 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949941 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949952 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949964 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949978 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.949989 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.950001 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.950014 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.951503 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bc5d5523-b52d-4739-9a22-3abb886d7f0d-cni-binary-copy\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 
16:03:43.956388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bc5d5523-b52d-4739-9a22-3abb886d7f0d-multus-daemon-config\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.947523 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.960124 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfdq\" (UniqueName: \"kubernetes.io/projected/b3ee11e6-3caf-46f7-8321-84633755d718-kube-api-access-kqfdq\") pod \"ovnkube-node-nj99p\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.960727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cdd\" (UniqueName: \"kubernetes.io/projected/10b18da7-d236-4556-a9ef-7d582b3ed224-kube-api-access-66cdd\") pod \"multus-additional-cni-plugins-gnhsw\" (UID: \"10b18da7-d236-4556-a9ef-7d582b3ed224\") " pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.960784 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.960966 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961028 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961088 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961150 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961208 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961274 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961331 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" 
(UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961541 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961638 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961730 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961833 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961918 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962008 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962520 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962654 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.961542 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc269c97-37cd-4773-ab83-9e47b2666fb4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962736 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962791 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962804 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" 
(UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962815 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962824 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962834 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962844 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962853 4687 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962865 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962875 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962886 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962897 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962906 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.962916 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.963513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drz6l\" (UniqueName: \"kubernetes.io/projected/cc269c97-37cd-4773-ab83-9e47b2666fb4-kube-api-access-drz6l\") pod \"ovnkube-control-plane-749d76644c-hj9kq\" (UID: \"cc269c97-37cd-4773-ab83-9e47b2666fb4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.963530 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxr8w\" (UniqueName: \"kubernetes.io/projected/349416ea-6932-403a-8e28-2ab01e85402c-kube-api-access-kxr8w\") pod \"node-resolver-657xn\" (UID: \"349416ea-6932-403a-8e28-2ab01e85402c\") " pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.967636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv6jr\" (UniqueName: \"kubernetes.io/projected/c7c33249-8ce6-49e4-a8a1-91bffa192e26-kube-api-access-bv6jr\") pod \"node-ca-dt4jw\" (UID: \"c7c33249-8ce6-49e4-a8a1-91bffa192e26\") " pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.968094 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a785ed51-b59b-4ec7-b31c-a66279b9151c-proxy-tls\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.970852 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4ghw\" (UniqueName: \"kubernetes.io/projected/a785ed51-b59b-4ec7-b31c-a66279b9151c-kube-api-access-n4ghw\") pod \"machine-config-daemon-bxjh2\" (UID: \"a785ed51-b59b-4ec7-b31c-a66279b9151c\") " pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.971135 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bbzb\" (UniqueName: \"kubernetes.io/projected/bb883751-bda8-4227-99fe-74d0b85cff17-kube-api-access-4bbzb\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:43 crc kubenswrapper[4687]: I0312 16:03:43.973384 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww5zj\" (UniqueName: \"kubernetes.io/projected/bc5d5523-b52d-4739-9a22-3abb886d7f0d-kube-api-access-ww5zj\") pod \"multus-9k44w\" (UID: \"bc5d5523-b52d-4739-9a22-3abb886d7f0d\") " pod="openshift-multus/multus-9k44w" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.016882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.016996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.017053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.017126 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.017187 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.050762 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.062735 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.066805 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ -f "/env/_master" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:44 crc kubenswrapper[4687]: source "/env/_master" Mar 12 16:03:44 crc kubenswrapper[4687]: set +o allexport Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 12 16:03:44 crc kubenswrapper[4687]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 12 16:03:44 crc kubenswrapper[4687]: ho_enable="--enable-hybrid-overlay" Mar 12 16:03:44 crc kubenswrapper[4687]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 12 16:03:44 crc kubenswrapper[4687]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 12 16:03:44 crc kubenswrapper[4687]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 12 16:03:44 crc kubenswrapper[4687]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 16:03:44 crc kubenswrapper[4687]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 12 16:03:44 crc kubenswrapper[4687]: --webhook-host=127.0.0.1 \ Mar 12 16:03:44 crc kubenswrapper[4687]: --webhook-port=9743 \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${ho_enable} \ Mar 12 16:03:44 crc kubenswrapper[4687]: --enable-interconnect \ Mar 12 16:03:44 crc kubenswrapper[4687]: --disable-approver \ Mar 12 16:03:44 crc kubenswrapper[4687]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 12 16:03:44 crc kubenswrapper[4687]: --wait-for-kubernetes-api=200s \ Mar 12 16:03:44 crc kubenswrapper[4687]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 12 16:03:44 crc kubenswrapper[4687]: --loglevel="${LOGLEVEL}" Mar 12 16:03:44 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.068851 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ -f "/env/_master" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:44 crc kubenswrapper[4687]: source "/env/_master" Mar 12 16:03:44 crc kubenswrapper[4687]: set +o allexport Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 12 16:03:44 crc kubenswrapper[4687]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 16:03:44 crc kubenswrapper[4687]: --disable-webhook \ Mar 12 16:03:44 crc kubenswrapper[4687]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 12 16:03:44 crc kubenswrapper[4687]: --loglevel="${LOGLEVEL}" Mar 12 16:03:44 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.070840 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.072327 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-65b49a5f6483cfb77be5bfa31fbc3a8dfabf18aa71a10f5ccc6658e5af0b9431 WatchSource:0}: Error finding container 65b49a5f6483cfb77be5bfa31fbc3a8dfabf18aa71a10f5ccc6658e5af0b9431: Status 404 returned error can't find the container with id 65b49a5f6483cfb77be5bfa31fbc3a8dfabf18aa71a10f5ccc6658e5af0b9431 Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.075199 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.075305 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.076939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.084471 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9k44w" Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.088621 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-e4f76f76d5ee1f59e55d11eb80e08260f82cbbf756d3f6e847b5bbc54e8af104 WatchSource:0}: Error finding container e4f76f76d5ee1f59e55d11eb80e08260f82cbbf756d3f6e847b5bbc54e8af104: Status 404 returned error can't find the container with id e4f76f76d5ee1f59e55d11eb80e08260f82cbbf756d3f6e847b5bbc54e8af104 Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.090649 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.091391 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 12 16:03:44 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: source /etc/kubernetes/apiserver-url.env Mar 12 16:03:44 crc kubenswrapper[4687]: else Mar 12 16:03:44 crc kubenswrapper[4687]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 12 16:03:44 crc kubenswrapper[4687]: exit 1 Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 12 16:03:44 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Va
lue:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.092741 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.097615 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.099234 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc5d5523_b52d_4739_9a22_3abb886d7f0d.slice/crio-45687c75a7b9ac1aa03c1fa69ccb199465f0fa2e4a5bfe0f2ea558c01ba4b330 WatchSource:0}: Error finding container 45687c75a7b9ac1aa03c1fa69ccb199465f0fa2e4a5bfe0f2ea558c01ba4b330: Status 404 returned error can't find the container with id 45687c75a7b9ac1aa03c1fa69ccb199465f0fa2e4a5bfe0f2ea558c01ba4b330 Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.106033 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-657xn" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.106526 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 12 16:03:44 crc kubenswrapper[4687]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 12 16:03:44 crc kubenswrapper[4687]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww5zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9k44w_openshift-multus(bc5d5523-b52d-4739-9a22-3abb886d7f0d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.107243 4687 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b18da7_d236_4556_a9ef_7d582b3ed224.slice/crio-61f27f12457493d82ada2105508dd643eed0e70e372fe7644f6a218e5dbf1aa0 WatchSource:0}: Error finding container 61f27f12457493d82ada2105508dd643eed0e70e372fe7644f6a218e5dbf1aa0: Status 404 returned error can't find the container with id 61f27f12457493d82ada2105508dd643eed0e70e372fe7644f6a218e5dbf1aa0 Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.108441 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9k44w" podUID="bc5d5523-b52d-4739-9a22-3abb886d7f0d" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.111730 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dt4jw" Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.114539 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3ee11e6_3caf_46f7_8321_84633755d718.slice/crio-51d6ef96f5fbddf92d9d64080adc939e996c4b3ee4e8b50de71cd7ba44902610 WatchSource:0}: Error finding container 51d6ef96f5fbddf92d9d64080adc939e996c4b3ee4e8b50de71cd7ba44902610: Status 404 returned error can't find the container with id 51d6ef96f5fbddf92d9d64080adc939e996c4b3ee4e8b50de71cd7ba44902610 Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.117487 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.117814 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66cdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-gnhsw_openshift-multus(10b18da7-d236-4556-a9ef-7d582b3ed224): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.117949 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 12 16:03:44 crc kubenswrapper[4687]: apiVersion: v1 Mar 12 16:03:44 crc kubenswrapper[4687]: clusters: Mar 12 16:03:44 crc kubenswrapper[4687]: - cluster: Mar 12 16:03:44 crc kubenswrapper[4687]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 12 16:03:44 crc kubenswrapper[4687]: server: https://api-int.crc.testing:6443 Mar 12 16:03:44 crc kubenswrapper[4687]: name: default-cluster Mar 12 16:03:44 crc kubenswrapper[4687]: contexts: Mar 12 16:03:44 crc kubenswrapper[4687]: - context: Mar 12 16:03:44 crc kubenswrapper[4687]: cluster: default-cluster Mar 12 16:03:44 crc kubenswrapper[4687]: namespace: default Mar 12 16:03:44 crc kubenswrapper[4687]: user: default-auth Mar 12 16:03:44 crc kubenswrapper[4687]: name: default-context Mar 12 16:03:44 crc kubenswrapper[4687]: current-context: default-context Mar 12 16:03:44 crc kubenswrapper[4687]: kind: Config Mar 12 16:03:44 crc kubenswrapper[4687]: preferences: {} Mar 12 16:03:44 crc kubenswrapper[4687]: users: Mar 12 16:03:44 crc kubenswrapper[4687]: - name: default-auth Mar 12 16:03:44 crc kubenswrapper[4687]: user: Mar 12 16:03:44 crc kubenswrapper[4687]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 12 16:03:44 crc kubenswrapper[4687]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 12 16:03:44 crc kubenswrapper[4687]: EOF Mar 12 16:03:44 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqfdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nj99p_openshift-ovn-kubernetes(b3ee11e6-3caf-46f7-8321-84633755d718): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.118955 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" 
podUID="10b18da7-d236-4556-a9ef-7d582b3ed224" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.119007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.119034 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.119047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.119066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.119076 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.119014 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.120616 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.125530 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7c33249_8ce6_49e4_a8a1_91bffa192e26.slice/crio-6826932d1776cfcadc4830dc63aa0b83ec06d3b1d8452e6eab6ab9646ce4981f WatchSource:0}: Error finding container 6826932d1776cfcadc4830dc63aa0b83ec06d3b1d8452e6eab6ab9646ce4981f: Status 404 returned error can't find the container with id 6826932d1776cfcadc4830dc63aa0b83ec06d3b1d8452e6eab6ab9646ce4981f Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.129234 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 12 16:03:44 crc kubenswrapper[4687]: while [ true ]; Mar 12 16:03:44 crc kubenswrapper[4687]: do Mar 12 16:03:44 crc kubenswrapper[4687]: for f in $(ls /tmp/serviceca); do Mar 12 16:03:44 crc kubenswrapper[4687]: echo $f Mar 12 16:03:44 crc kubenswrapper[4687]: ca_file_path="/tmp/serviceca/${f}" Mar 12 16:03:44 crc kubenswrapper[4687]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 12 16:03:44 crc kubenswrapper[4687]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 12 16:03:44 crc kubenswrapper[4687]: if [ -e "${reg_dir_path}" ]; then Mar 12 16:03:44 crc kubenswrapper[4687]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 12 16:03:44 crc kubenswrapper[4687]: else Mar 12 16:03:44 crc kubenswrapper[4687]: mkdir $reg_dir_path Mar 12 16:03:44 crc kubenswrapper[4687]: cp $ca_file_path 
$reg_dir_path/ca.crt Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: for d in $(ls /etc/docker/certs.d); do Mar 12 16:03:44 crc kubenswrapper[4687]: echo $d Mar 12 16:03:44 crc kubenswrapper[4687]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 12 16:03:44 crc kubenswrapper[4687]: reg_conf_path="/tmp/serviceca/${dp}" Mar 12 16:03:44 crc kubenswrapper[4687]: if [ ! -e "${reg_conf_path}" ]; then Mar 12 16:03:44 crc kubenswrapper[4687]: rm -rf /etc/docker/certs.d/$d Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: sleep 60 & wait ${!} Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv6jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-dt4jw_openshift-image-registry(c7c33249-8ce6-49e4-a8a1-91bffa192e26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.130495 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-dt4jw" podUID="c7c33249-8ce6-49e4-a8a1-91bffa192e26" Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.132072 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod349416ea_6932_403a_8e28_2ab01e85402c.slice/crio-0be8eba51f2dc292b964c16696306a660c451905269b6cb44f7319cd446706f5 WatchSource:0}: Error finding container 0be8eba51f2dc292b964c16696306a660c451905269b6cb44f7319cd446706f5: Status 404 returned error can't find the container with id 0be8eba51f2dc292b964c16696306a660c451905269b6cb44f7319cd446706f5 Mar 12 16:03:44 crc kubenswrapper[4687]: W0312 16:03:44.134082 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc269c97_37cd_4773_ab83_9e47b2666fb4.slice/crio-6ffcf1a9ab743cae52931087cd5ec84e1a783b45cd92f3a6fcc0b5ce016b435e WatchSource:0}: Error finding container 6ffcf1a9ab743cae52931087cd5ec84e1a783b45cd92f3a6fcc0b5ce016b435e: Status 404 returned error can't find the container with id 6ffcf1a9ab743cae52931087cd5ec84e1a783b45cd92f3a6fcc0b5ce016b435e Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.134885 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 12 16:03:44 crc kubenswrapper[4687]: set -uo pipefail Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 12 16:03:44 crc kubenswrapper[4687]: HOSTS_FILE="/etc/hosts" Mar 12 16:03:44 crc kubenswrapper[4687]: TEMP_FILE="/etc/hosts.tmp" Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: # Make a temporary file with the old hosts file's attributes. Mar 12 16:03:44 crc kubenswrapper[4687]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 12 16:03:44 crc kubenswrapper[4687]: echo "Failed to preserve hosts file. Exiting." Mar 12 16:03:44 crc kubenswrapper[4687]: exit 1 Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: while true; do Mar 12 16:03:44 crc kubenswrapper[4687]: declare -A svc_ips Mar 12 16:03:44 crc kubenswrapper[4687]: for svc in "${services[@]}"; do Mar 12 16:03:44 crc kubenswrapper[4687]: # Fetch service IP from cluster dns if present. We make several tries Mar 12 16:03:44 crc kubenswrapper[4687]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 12 16:03:44 crc kubenswrapper[4687]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 12 16:03:44 crc kubenswrapper[4687]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 12 16:03:44 crc kubenswrapper[4687]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 12 16:03:44 crc kubenswrapper[4687]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 12 16:03:44 crc kubenswrapper[4687]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 12 16:03:44 crc kubenswrapper[4687]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 12 16:03:44 crc kubenswrapper[4687]: for i in ${!cmds[*]} Mar 12 16:03:44 crc kubenswrapper[4687]: do Mar 12 16:03:44 crc kubenswrapper[4687]: ips=($(eval "${cmds[i]}")) Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: svc_ips["${svc}"]="${ips[@]}" Mar 12 16:03:44 crc kubenswrapper[4687]: break Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: # Update /etc/hosts only if we get valid service IPs Mar 12 16:03:44 crc kubenswrapper[4687]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 12 16:03:44 crc kubenswrapper[4687]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 12 16:03:44 crc kubenswrapper[4687]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 12 16:03:44 crc kubenswrapper[4687]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 12 16:03:44 crc kubenswrapper[4687]: sleep 60 & wait Mar 12 16:03:44 crc kubenswrapper[4687]: continue Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: # Append resolver entries for services Mar 12 16:03:44 crc kubenswrapper[4687]: rc=0 Mar 12 16:03:44 crc kubenswrapper[4687]: for svc in "${!svc_ips[@]}"; do Mar 12 16:03:44 crc kubenswrapper[4687]: for ip in ${svc_ips[${svc}]}; do Mar 12 16:03:44 crc kubenswrapper[4687]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ $rc -ne 0 ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: sleep 60 & wait Mar 12 16:03:44 crc kubenswrapper[4687]: continue Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 12 16:03:44 crc kubenswrapper[4687]: # Replace /etc/hosts with our modified version if needed Mar 12 16:03:44 crc kubenswrapper[4687]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 12 16:03:44 crc kubenswrapper[4687]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: sleep 60 & wait Mar 12 16:03:44 crc kubenswrapper[4687]: unset svc_ips Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxr8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-657xn_openshift-dns(349416ea-6932-403a-8e28-2ab01e85402c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.135939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-657xn" podUID="349416ea-6932-403a-8e28-2ab01e85402c" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.138174 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 12 16:03:44 crc kubenswrapper[4687]: set -euo pipefail Mar 12 16:03:44 crc kubenswrapper[4687]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 12 16:03:44 crc kubenswrapper[4687]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 12 16:03:44 crc kubenswrapper[4687]: # As the secret mount is optional we must wait for the files to be present. Mar 12 16:03:44 crc kubenswrapper[4687]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 12 16:03:44 crc kubenswrapper[4687]: TS=$(date +%s) Mar 12 16:03:44 crc kubenswrapper[4687]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 12 16:03:44 crc kubenswrapper[4687]: HAS_LOGGED_INFO=0 Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: log_missing_certs(){ Mar 12 16:03:44 crc kubenswrapper[4687]: CUR_TS=$(date +%s) Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 12 16:03:44 crc kubenswrapper[4687]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 12 16:03:44 crc kubenswrapper[4687]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 12 16:03:44 crc kubenswrapper[4687]: HAS_LOGGED_INFO=1 Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: } Mar 12 16:03:44 crc kubenswrapper[4687]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 12 16:03:44 crc kubenswrapper[4687]: log_missing_certs Mar 12 16:03:44 crc kubenswrapper[4687]: sleep 5 Mar 12 16:03:44 crc kubenswrapper[4687]: done Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 12 16:03:44 crc kubenswrapper[4687]: exec /usr/bin/kube-rbac-proxy \ Mar 12 16:03:44 crc kubenswrapper[4687]: --logtostderr \ Mar 12 16:03:44 crc kubenswrapper[4687]: --secure-listen-address=:9108 \ Mar 12 16:03:44 crc kubenswrapper[4687]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 12 16:03:44 crc kubenswrapper[4687]: --upstream=http://127.0.0.1:29108/ \ Mar 12 16:03:44 crc kubenswrapper[4687]: --tls-private-key-file=${TLS_PK} \ Mar 12 16:03:44 crc kubenswrapper[4687]: --tls-cert-file=${TLS_CERT} Mar 12 16:03:44 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-drz6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hj9kq_openshift-ovn-kubernetes(cc269c97-37cd-4773-ab83-9e47b2666fb4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.140961 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:44 crc kubenswrapper[4687]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ -f "/env/_master" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:44 crc kubenswrapper[4687]: source "/env/_master" Mar 12 16:03:44 crc kubenswrapper[4687]: set +o allexport Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v4_join_subnet_opt= Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v6_join_subnet_opt= Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 
16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v4_transit_switch_subnet_opt= Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v6_transit_switch_subnet_opt= Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: dns_name_resolver_enabled_flag= Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "false" == "true" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: persistent_ips_enabled_flag= Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "true" == "true" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: # This is needed so that converting clusters from GA to TP Mar 12 16:03:44 crc kubenswrapper[4687]: # will rollout control plane pods as well Mar 12 16:03:44 crc kubenswrapper[4687]: network_segmentation_enabled_flag= Mar 12 16:03:44 crc kubenswrapper[4687]: multi_network_enabled_flag= Mar 12 16:03:44 crc kubenswrapper[4687]: if [[ "true" == "true" ]]; then Mar 12 16:03:44 crc kubenswrapper[4687]: multi_network_enabled_flag="--enable-multi-network" Mar 12 16:03:44 crc kubenswrapper[4687]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 12 16:03:44 crc kubenswrapper[4687]: fi Mar 12 16:03:44 crc kubenswrapper[4687]: Mar 12 16:03:44 crc kubenswrapper[4687]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 12 16:03:44 crc kubenswrapper[4687]: exec /usr/bin/ovnkube \ Mar 12 16:03:44 crc kubenswrapper[4687]: --enable-interconnect \ Mar 12 16:03:44 crc kubenswrapper[4687]: --init-cluster-manager "${K8S_NODE}" \ Mar 12 16:03:44 crc kubenswrapper[4687]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 12 16:03:44 crc kubenswrapper[4687]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 12 16:03:44 crc kubenswrapper[4687]: --metrics-bind-address "127.0.0.1:29108" \ Mar 12 16:03:44 crc kubenswrapper[4687]: --metrics-enable-pprof \ Mar 12 16:03:44 crc kubenswrapper[4687]: --metrics-enable-config-duration \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${ovn_v4_join_subnet_opt} \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${ovn_v6_join_subnet_opt} \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${dns_name_resolver_enabled_flag} \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${persistent_ips_enabled_flag} \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${multi_network_enabled_flag} \ Mar 12 16:03:44 crc kubenswrapper[4687]: ${network_segmentation_enabled_flag} Mar 12 16:03:44 crc kubenswrapper[4687]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-drz6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hj9kq_openshift-ovn-kubernetes(cc269c97-37cd-4773-ab83-9e47b2666fb4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:44 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.142294 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" podUID="cc269c97-37cd-4773-ab83-9e47b2666fb4" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.142595 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4ghw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.145557 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4ghw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.146777 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.221537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.221577 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.221589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.221603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.221615 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.324436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.324496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.324506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.324523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.324532 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.366892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.367103 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:03:45.367070583 +0000 UTC m=+74.331032927 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.428040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.428072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.428080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.428094 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.428104 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.467349 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.467400 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.467423 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.467466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.467487 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467537 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467570 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467572 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467591 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467611 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:45.467597809 +0000 UTC m=+74.431560153 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467648 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:45.46763017 +0000 UTC m=+74.431592504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467702 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467726 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs podName:bb883751-bda8-4227-99fe-74d0b85cff17 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:45.467720032 +0000 UTC m=+74.431682376 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs") pod "network-metrics-daemon-d4g6l" (UID: "bb883751-bda8-4227-99fe-74d0b85cff17") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467757 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467779 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:45.467773683 +0000 UTC m=+74.431736027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467819 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467829 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467836 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.467854 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:45.467848866 +0000 UTC m=+74.431811200 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.530223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.530282 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.530299 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.530324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.530341 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.633148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.633202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.633219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.633241 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.633258 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.736629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.736701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.736725 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.736754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.736772 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.752057 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.766507 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.766837 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:03:44 crc kubenswrapper[4687]: E0312 16:03:44.767103 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.839920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.839998 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.840018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.840041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.840059 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.942681 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.942718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.942727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.942742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:44 crc kubenswrapper[4687]: I0312 16:03:44.942752 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:44Z","lastTransitionTime":"2026-03-12T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.010918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9k44w" event={"ID":"bc5d5523-b52d-4739-9a22-3abb886d7f0d","Type":"ContainerStarted","Data":"45687c75a7b9ac1aa03c1fa69ccb199465f0fa2e4a5bfe0f2ea558c01ba4b330"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.012298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerStarted","Data":"61f27f12457493d82ada2105508dd643eed0e70e372fe7644f6a218e5dbf1aa0"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.013874 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bd04669e79f404a779b2aa19c8fa9b032d03f119ac35b553c200d08ee40c391e"} Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.013975 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 12 16:03:45 crc kubenswrapper[4687]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 12 16:03:45 crc kubenswrapper[4687]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww5zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9k44w_openshift-multus(bc5d5523-b52d-4739-9a22-3abb886d7f0d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.014237 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66cdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-gnhsw_openshift-multus(10b18da7-d236-4556-a9ef-7d582b3ed224): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.015432 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" podUID="10b18da7-d236-4556-a9ef-7d582b3ed224" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.015494 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ -f "/env/_master" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:45 crc kubenswrapper[4687]: source "/env/_master" Mar 12 16:03:45 crc kubenswrapper[4687]: set +o allexport Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 12 16:03:45 crc kubenswrapper[4687]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 12 16:03:45 crc kubenswrapper[4687]: ho_enable="--enable-hybrid-overlay" Mar 12 16:03:45 crc kubenswrapper[4687]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 12 16:03:45 crc kubenswrapper[4687]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 12 16:03:45 crc kubenswrapper[4687]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 12 16:03:45 crc kubenswrapper[4687]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 16:03:45 crc kubenswrapper[4687]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 12 16:03:45 crc kubenswrapper[4687]: --webhook-host=127.0.0.1 \ Mar 12 16:03:45 crc kubenswrapper[4687]: --webhook-port=9743 \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${ho_enable} \ Mar 12 16:03:45 crc kubenswrapper[4687]: --enable-interconnect \ Mar 12 16:03:45 crc kubenswrapper[4687]: --disable-approver \ Mar 12 16:03:45 crc kubenswrapper[4687]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 12 16:03:45 crc kubenswrapper[4687]: --wait-for-kubernetes-api=200s \ Mar 12 16:03:45 crc kubenswrapper[4687]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 12 16:03:45 crc kubenswrapper[4687]: --loglevel="${LOGLEVEL}" Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.015555 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9k44w" podUID="bc5d5523-b52d-4739-9a22-3abb886d7f0d" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.021740 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" event={"ID":"cc269c97-37cd-4773-ab83-9e47b2666fb4","Type":"ContainerStarted","Data":"6ffcf1a9ab743cae52931087cd5ec84e1a783b45cd92f3a6fcc0b5ce016b435e"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.022637 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.023753 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ -f "/env/_master" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:45 crc kubenswrapper[4687]: source "/env/_master" Mar 12 16:03:45 crc kubenswrapper[4687]: set +o allexport Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 12 16:03:45 crc kubenswrapper[4687]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 12 16:03:45 crc kubenswrapper[4687]: --disable-webhook \ Mar 12 16:03:45 crc kubenswrapper[4687]: 
--csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 12 16:03:45 crc kubenswrapper[4687]: --loglevel="${LOGLEVEL}" Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.024457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dt4jw" event={"ID":"c7c33249-8ce6-49e4-a8a1-91bffa192e26","Type":"ContainerStarted","Data":"6826932d1776cfcadc4830dc63aa0b83ec06d3b1d8452e6eab6ab9646ce4981f"} Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.024833 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.025517 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"51d6ef96f5fbddf92d9d64080adc939e996c4b3ee4e8b50de71cd7ba44902610"} Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.025751 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 12 16:03:45 crc kubenswrapper[4687]: 
while [ true ]; Mar 12 16:03:45 crc kubenswrapper[4687]: do Mar 12 16:03:45 crc kubenswrapper[4687]: for f in $(ls /tmp/serviceca); do Mar 12 16:03:45 crc kubenswrapper[4687]: echo $f Mar 12 16:03:45 crc kubenswrapper[4687]: ca_file_path="/tmp/serviceca/${f}" Mar 12 16:03:45 crc kubenswrapper[4687]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 12 16:03:45 crc kubenswrapper[4687]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 12 16:03:45 crc kubenswrapper[4687]: if [ -e "${reg_dir_path}" ]; then Mar 12 16:03:45 crc kubenswrapper[4687]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 12 16:03:45 crc kubenswrapper[4687]: else Mar 12 16:03:45 crc kubenswrapper[4687]: mkdir $reg_dir_path Mar 12 16:03:45 crc kubenswrapper[4687]: cp $ca_file_path $reg_dir_path/ca.crt Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: for d in $(ls /etc/docker/certs.d); do Mar 12 16:03:45 crc kubenswrapper[4687]: echo $d Mar 12 16:03:45 crc kubenswrapper[4687]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 12 16:03:45 crc kubenswrapper[4687]: reg_conf_path="/tmp/serviceca/${dp}" Mar 12 16:03:45 crc kubenswrapper[4687]: if [ ! -e "${reg_conf_path}" ]; then Mar 12 16:03:45 crc kubenswrapper[4687]: rm -rf /etc/docker/certs.d/$d Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: sleep 60 & wait ${!} Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv6jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-dt4jw_openshift-image-registry(c7c33249-8ce6-49e4-a8a1-91bffa192e26): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.027838 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-dt4jw" podUID="c7c33249-8ce6-49e4-a8a1-91bffa192e26" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.028414 4687 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 12 16:03:45 crc kubenswrapper[4687]: apiVersion: v1 Mar 12 16:03:45 crc kubenswrapper[4687]: clusters: Mar 12 16:03:45 crc kubenswrapper[4687]: - cluster: Mar 12 16:03:45 crc kubenswrapper[4687]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 12 16:03:45 crc kubenswrapper[4687]: server: https://api-int.crc.testing:6443 Mar 12 16:03:45 crc kubenswrapper[4687]: name: default-cluster Mar 12 16:03:45 crc kubenswrapper[4687]: contexts: Mar 12 16:03:45 crc kubenswrapper[4687]: - context: Mar 12 16:03:45 crc kubenswrapper[4687]: cluster: default-cluster Mar 12 16:03:45 crc kubenswrapper[4687]: namespace: default Mar 12 16:03:45 crc kubenswrapper[4687]: user: default-auth Mar 12 16:03:45 crc kubenswrapper[4687]: name: default-context Mar 12 16:03:45 crc kubenswrapper[4687]: current-context: default-context Mar 12 16:03:45 crc kubenswrapper[4687]: kind: Config Mar 12 16:03:45 crc kubenswrapper[4687]: preferences: {} Mar 12 16:03:45 crc kubenswrapper[4687]: users: Mar 12 16:03:45 crc kubenswrapper[4687]: - name: default-auth Mar 12 16:03:45 crc kubenswrapper[4687]: user: Mar 12 16:03:45 crc kubenswrapper[4687]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 12 16:03:45 crc kubenswrapper[4687]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 12 16:03:45 crc kubenswrapper[4687]: EOF Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqfdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nj99p_openshift-ovn-kubernetes(b3ee11e6-3caf-46f7-8321-84633755d718): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.028708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"ffa4d23099adab2b186d2ddcb8cf97050bd705ae6668cc2caa2fb80415b58004"} Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.029798 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" 
podUID="b3ee11e6-3caf-46f7-8321-84633755d718" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.030088 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4ghw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.031393 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 12 16:03:45 crc kubenswrapper[4687]: set -euo pipefail Mar 12 16:03:45 crc kubenswrapper[4687]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 12 16:03:45 crc kubenswrapper[4687]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 12 16:03:45 crc kubenswrapper[4687]: # As the secret mount is optional we must wait for the files to be present. Mar 12 16:03:45 crc kubenswrapper[4687]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 12 16:03:45 crc kubenswrapper[4687]: TS=$(date +%s) Mar 12 16:03:45 crc kubenswrapper[4687]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 12 16:03:45 crc kubenswrapper[4687]: HAS_LOGGED_INFO=0 Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: log_missing_certs(){ Mar 12 16:03:45 crc kubenswrapper[4687]: CUR_TS=$(date +%s) Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 12 16:03:45 crc kubenswrapper[4687]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 12 16:03:45 crc kubenswrapper[4687]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 12 16:03:45 crc kubenswrapper[4687]: HAS_LOGGED_INFO=1 Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: } Mar 12 16:03:45 crc kubenswrapper[4687]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 12 16:03:45 crc kubenswrapper[4687]: log_missing_certs Mar 12 16:03:45 crc kubenswrapper[4687]: sleep 5 Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 12 16:03:45 crc kubenswrapper[4687]: exec /usr/bin/kube-rbac-proxy \ Mar 12 16:03:45 crc kubenswrapper[4687]: --logtostderr \ Mar 12 16:03:45 crc kubenswrapper[4687]: --secure-listen-address=:9108 \ Mar 12 16:03:45 crc kubenswrapper[4687]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 12 16:03:45 crc kubenswrapper[4687]: --upstream=http://127.0.0.1:29108/ \ Mar 12 16:03:45 crc kubenswrapper[4687]: --tls-private-key-file=${TLS_PK} \ Mar 12 16:03:45 crc kubenswrapper[4687]: --tls-cert-file=${TLS_CERT} Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-drz6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hj9kq_openshift-ovn-kubernetes(cc269c97-37cd-4773-ab83-9e47b2666fb4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.031845 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4ghw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.032927 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.035178 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ -f "/env/_master" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:45 crc kubenswrapper[4687]: source "/env/_master" Mar 12 16:03:45 crc kubenswrapper[4687]: set +o allexport Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v4_join_subnet_opt= Mar 12 16:03:45 crc 
kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v6_join_subnet_opt= Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v4_transit_switch_subnet_opt= Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v6_transit_switch_subnet_opt= Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "" != "" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: dns_name_resolver_enabled_flag= Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "false" == "true" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: persistent_ips_enabled_flag= Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "true" == "true" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: # This is needed so that converting clusters from GA to TP Mar 12 16:03:45 crc kubenswrapper[4687]: # will rollout control plane pods as well Mar 12 16:03:45 crc kubenswrapper[4687]: network_segmentation_enabled_flag= Mar 12 16:03:45 crc kubenswrapper[4687]: multi_network_enabled_flag= Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "true" == "true" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: multi_network_enabled_flag="--enable-multi-network" Mar 12 16:03:45 crc kubenswrapper[4687]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 12 16:03:45 crc kubenswrapper[4687]: exec /usr/bin/ovnkube \ Mar 12 16:03:45 crc kubenswrapper[4687]: --enable-interconnect \ Mar 12 16:03:45 crc kubenswrapper[4687]: --init-cluster-manager "${K8S_NODE}" \ Mar 12 16:03:45 crc kubenswrapper[4687]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 12 16:03:45 crc kubenswrapper[4687]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 12 16:03:45 crc kubenswrapper[4687]: --metrics-bind-address "127.0.0.1:29108" \ Mar 12 16:03:45 crc kubenswrapper[4687]: --metrics-enable-pprof \ Mar 12 16:03:45 crc kubenswrapper[4687]: --metrics-enable-config-duration \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${ovn_v4_join_subnet_opt} \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${ovn_v6_join_subnet_opt} \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 12 16:03:45 crc 
kubenswrapper[4687]: ${dns_name_resolver_enabled_flag} \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${persistent_ips_enabled_flag} \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${multi_network_enabled_flag} \ Mar 12 16:03:45 crc kubenswrapper[4687]: ${network_segmentation_enabled_flag} Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-drz6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hj9kq_openshift-ovn-kubernetes(cc269c97-37cd-4773-ab83-9e47b2666fb4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.035227 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-657xn" event={"ID":"349416ea-6932-403a-8e28-2ab01e85402c","Type":"ContainerStarted","Data":"0be8eba51f2dc292b964c16696306a660c451905269b6cb44f7319cd446706f5"} Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.036427 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" podUID="cc269c97-37cd-4773-ab83-9e47b2666fb4" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.036603 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 12 16:03:45 crc kubenswrapper[4687]: set -uo pipefail Mar 12 16:03:45 crc kubenswrapper[4687]: 
Mar 12 16:03:45 crc kubenswrapper[4687]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 12 16:03:45 crc kubenswrapper[4687]: HOSTS_FILE="/etc/hosts" Mar 12 16:03:45 crc kubenswrapper[4687]: TEMP_FILE="/etc/hosts.tmp" Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: # Make a temporary file with the old hosts file's attributes. Mar 12 16:03:45 crc kubenswrapper[4687]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 12 16:03:45 crc kubenswrapper[4687]: echo "Failed to preserve hosts file. Exiting." Mar 12 16:03:45 crc kubenswrapper[4687]: exit 1 Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: while true; do Mar 12 16:03:45 crc kubenswrapper[4687]: declare -A svc_ips Mar 12 16:03:45 crc kubenswrapper[4687]: for svc in "${services[@]}"; do Mar 12 16:03:45 crc kubenswrapper[4687]: # Fetch service IP from cluster dns if present. We make several tries Mar 12 16:03:45 crc kubenswrapper[4687]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 12 16:03:45 crc kubenswrapper[4687]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 12 16:03:45 crc kubenswrapper[4687]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 12 16:03:45 crc kubenswrapper[4687]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 12 16:03:45 crc kubenswrapper[4687]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 12 16:03:45 crc kubenswrapper[4687]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 12 16:03:45 crc kubenswrapper[4687]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 12 16:03:45 crc kubenswrapper[4687]: for i in ${!cmds[*]} Mar 12 16:03:45 crc kubenswrapper[4687]: do Mar 12 16:03:45 crc kubenswrapper[4687]: ips=($(eval "${cmds[i]}")) Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: svc_ips["${svc}"]="${ips[@]}" Mar 12 16:03:45 crc kubenswrapper[4687]: break Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: # Update /etc/hosts only if we get valid service IPs Mar 12 16:03:45 crc kubenswrapper[4687]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 12 16:03:45 crc kubenswrapper[4687]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 12 16:03:45 crc kubenswrapper[4687]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 12 16:03:45 crc kubenswrapper[4687]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 12 16:03:45 crc kubenswrapper[4687]: sleep 60 & wait Mar 12 16:03:45 crc kubenswrapper[4687]: continue Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: # Append resolver entries for services Mar 12 16:03:45 crc kubenswrapper[4687]: rc=0 Mar 12 16:03:45 crc kubenswrapper[4687]: for svc in "${!svc_ips[@]}"; do Mar 12 16:03:45 crc kubenswrapper[4687]: for ip in ${svc_ips[${svc}]}; do Mar 12 16:03:45 crc kubenswrapper[4687]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ $rc -ne 0 ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: sleep 60 & wait Mar 12 16:03:45 crc kubenswrapper[4687]: continue Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: Mar 12 16:03:45 crc kubenswrapper[4687]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 12 16:03:45 crc kubenswrapper[4687]: # Replace /etc/hosts with our modified version if needed Mar 12 16:03:45 crc kubenswrapper[4687]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 12 16:03:45 crc kubenswrapper[4687]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: sleep 60 & wait Mar 12 16:03:45 crc kubenswrapper[4687]: unset svc_ips Mar 12 16:03:45 crc kubenswrapper[4687]: done Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxr8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-657xn_openshift-dns(349416ea-6932-403a-8e28-2ab01e85402c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc 
kubenswrapper[4687]: I0312 16:03:45.037239 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e4f76f76d5ee1f59e55d11eb80e08260f82cbbf756d3f6e847b5bbc54e8af104"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.037555 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.037724 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-657xn" podUID="349416ea-6932-403a-8e28-2ab01e85402c" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.038322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"65b49a5f6483cfb77be5bfa31fbc3a8dfabf18aa71a10f5ccc6658e5af0b9431"} Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.038500 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:03:45 crc kubenswrapper[4687]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 12 16:03:45 crc kubenswrapper[4687]: set -o allexport Mar 12 16:03:45 crc kubenswrapper[4687]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 12 16:03:45 crc kubenswrapper[4687]: source /etc/kubernetes/apiserver-url.env Mar 12 16:03:45 crc kubenswrapper[4687]: else Mar 12 16:03:45 crc kubenswrapper[4687]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 12 16:03:45 crc kubenswrapper[4687]: exit 1 Mar 12 16:03:45 crc kubenswrapper[4687]: fi Mar 12 16:03:45 crc kubenswrapper[4687]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 12 16:03:45 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd890
9e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 12 16:03:45 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.039584 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.039728 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.039832 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.040212 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.041905 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.044742 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.044814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.044823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.044836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.044845 4687 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.048904 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.060919 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.069781 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.079025 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.088613 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.096802 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.117718 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.128251 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.144140 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.147289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.147338 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.147397 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.147423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.147437 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.163656 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.178522 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.187939 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.196973 4687 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.211905 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.224261 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.232221 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.241152 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.249960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.250155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.250305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.250446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.250554 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.253004 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.263157 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.273516 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.283438 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.293120 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.304985 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.311829 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.318574 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.328828 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.338911 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.353259 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.353293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.353301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.353314 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.353325 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.357069 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.378420 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.378633 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:03:47.378607463 +0000 UTC m=+76.342569817 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.455468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.455870 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.456017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.456181 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.456319 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.480118 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.480165 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.480192 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.480212 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.480235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480294 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480312 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480336 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480376 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480316 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480430 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:47.480378133 +0000 UTC m=+76.444340497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480467 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs podName:bb883751-bda8-4227-99fe-74d0b85cff17 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:47.480451865 +0000 UTC m=+76.444414289 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs") pod "network-metrics-daemon-d4g6l" (UID: "bb883751-bda8-4227-99fe-74d0b85cff17") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480430 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480485 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-12 16:03:47.480478325 +0000 UTC m=+76.444440689 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480509 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:47.480498606 +0000 UTC m=+76.444460950 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480657 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480728 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480756 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.480912 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:47.480868696 +0000 UTC m=+76.444831070 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.559916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.560334 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.560644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.560956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.561177 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.664909 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.665010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.665028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.665053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.665071 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.732057 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.732097 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.732257 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.732295 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.732502 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.732697 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.732835 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:45 crc kubenswrapper[4687]: E0312 16:03:45.733015 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.740564 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.742469 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.745124 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.746802 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.749638 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.751347 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.753016 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.757166 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.759468 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.760896 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.761947 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.764277 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.765340 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.767735 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.767931 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.767996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.768017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.768043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.768061 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.768939 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.770955 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.772330 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.773241 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.775187 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.776512 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.777719 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.779669 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.780577 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 
16:03:45.783272 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.784152 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.786333 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.787996 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.788981 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.790963 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.791945 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.793864 4687 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.794066 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.797567 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.800300 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.802088 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.805870 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.808287 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 
16:03:45.809585 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.811686 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.813631 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.814696 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.816678 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.818780 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.820116 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.822063 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.823200 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.824993 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.826626 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.827298 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.828102 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.829230 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.829935 4687 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.831415 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.832042 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.832637 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.870531 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.870585 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.870610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.870637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.870658 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.973938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.973986 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.974004 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.974031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:45 crc kubenswrapper[4687]: I0312 16:03:45.974050 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:45Z","lastTransitionTime":"2026-03-12T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.077180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.077214 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.077224 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.077271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.077285 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.181055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.181127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.181154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.181188 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.181212 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.284533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.284591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.284616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.284645 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.284667 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.387963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.388013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.388024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.388042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.388054 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.490771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.490822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.490833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.490850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.490860 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.592797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.592838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.592857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.592874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.592883 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.695337 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.695451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.695511 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.695548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.695605 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.797574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.797617 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.797628 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.797645 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.797657 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.900167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.900256 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.900292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.900418 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:46 crc kubenswrapper[4687]: I0312 16:03:46.900443 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:46Z","lastTransitionTime":"2026-03-12T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.003147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.003201 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.003217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.003240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.003256 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.105498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.105546 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.105563 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.105586 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.105602 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.208182 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.208436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.208581 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.208673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.208782 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.311522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.311574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.311592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.311626 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.311648 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.415404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.415796 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.416066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.416402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.416637 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.430987 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.431317 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:03:51.431291481 +0000 UTC m=+80.395253855 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.520650 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.521001 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.521515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.521584 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.521609 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.531832 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.532141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.532424 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.532717 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.533052 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.531988 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.533550 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.532332 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.532656 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.532984 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.533275 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.533854 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.534071 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:51.534045737 +0000 UTC m=+80.498008111 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.534692 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.535556 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.535383 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:51.535372442 +0000 UTC m=+80.499334786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.536202 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs podName:bb883751-bda8-4227-99fe-74d0b85cff17 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:51.536173174 +0000 UTC m=+80.500135548 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs") pod "network-metrics-daemon-d4g6l" (UID: "bb883751-bda8-4227-99fe-74d0b85cff17") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.536959 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:51.536933235 +0000 UTC m=+80.500895609 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.537162 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:51.537145191 +0000 UTC m=+80.501107565 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.625544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.625596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.625613 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.625634 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.625649 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.728608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.728674 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.728694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.728717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.728734 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.732132 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.732139 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.732213 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.732222 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.732437 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.732560 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.732653 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:47 crc kubenswrapper[4687]: E0312 16:03:47.732807 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.831845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.831905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.831926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.831951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.831969 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.935279 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.935340 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.935387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.935412 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:47 crc kubenswrapper[4687]: I0312 16:03:47.935430 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:47Z","lastTransitionTime":"2026-03-12T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.038474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.038543 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.038559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.038581 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.038599 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.141135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.141206 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.141229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.141255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.141272 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.244828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.244906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.244923 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.244945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.244962 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.347175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.347245 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.347264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.347288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.347305 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.450422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.450456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.450466 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.450482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.450493 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.553336 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.553872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.554032 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.554211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.554352 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.657592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.657631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.657642 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.657657 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.657669 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.760759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.760831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.760850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.760907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.760926 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.863702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.863740 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.863750 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.863763 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.863772 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.966639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.966718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.966738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.966764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:48 crc kubenswrapper[4687]: I0312 16:03:48.966785 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:48Z","lastTransitionTime":"2026-03-12T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.069765 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.069846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.069872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.069904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.069926 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.172716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.172756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.172767 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.172782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.172793 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.274666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.274712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.274729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.274754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.274774 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.377318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.377378 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.377392 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.377409 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.377423 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.479744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.479790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.479805 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.479825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.479847 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.582222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.582273 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.582291 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.582315 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.582332 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.685764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.685828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.685850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.685882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.685906 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.732477 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.732537 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:49 crc kubenswrapper[4687]: E0312 16:03:49.732642 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:49 crc kubenswrapper[4687]: E0312 16:03:49.732791 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.732559 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.732871 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:49 crc kubenswrapper[4687]: E0312 16:03:49.732959 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:49 crc kubenswrapper[4687]: E0312 16:03:49.733063 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.788693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.788752 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.788768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.788791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.788808 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.891207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.891260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.891275 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.891296 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.891311 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.994233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.994274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.994285 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.994301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:49 crc kubenswrapper[4687]: I0312 16:03:49.994313 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:49Z","lastTransitionTime":"2026-03-12T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.096941 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.097000 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.097019 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.097046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.097071 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.199419 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.199474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.199491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.199516 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.199533 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.302700 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.302764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.302782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.302806 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.302824 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.405852 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.405920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.405938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.405964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.405982 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.508835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.508910 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.508929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.508956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.508973 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.612255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.612404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.612525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.612575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.612595 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.715331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.715441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.715460 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.715489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.715511 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.819816 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.820109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.820117 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.820132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.820141 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.923688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.923764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.923786 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.923812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:50 crc kubenswrapper[4687]: I0312 16:03:50.923828 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:50Z","lastTransitionTime":"2026-03-12T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.026629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.026699 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.026718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.026744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.026762 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.131810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.131907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.131925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.131993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.132012 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.234476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.234512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.234523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.234540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.234549 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.337668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.337744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.337762 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.337786 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.337804 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.440531 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.440584 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.440601 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.440624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.440641 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.473674 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.474029 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:03:59.473982425 +0000 UTC m=+88.437944809 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.543578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.543629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.543640 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.543659 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.543671 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.574954 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.575006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.575038 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.575073 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.575100 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575183 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575243 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:59.575224411 +0000 UTC m=+88.539186765 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575465 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575475 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575480 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575548 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575571 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575481 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575653 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575667 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575523 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs podName:bb883751-bda8-4227-99fe-74d0b85cff17 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:59.575508569 +0000 UTC m=+88.539470933 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs") pod "network-metrics-daemon-d4g6l" (UID: "bb883751-bda8-4227-99fe-74d0b85cff17") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575712 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:59.575683903 +0000 UTC m=+88.539646267 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575734 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:59.575723924 +0000 UTC m=+88.539686278 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.575759 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 16:03:59.575751285 +0000 UTC m=+88.539713649 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.578235 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.578276 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.578288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.578305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.578318 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.593540 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.597828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.597865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.597876 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.597893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.597906 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.608868 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.612974 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.613042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.613062 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.613090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.613108 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.630065 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.634243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.634279 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.634291 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.634312 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.634325 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.647748 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.652650 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.652706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.652719 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.652743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.652757 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.667661 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8c6bece5-19ef-4b03-9eea-64676974d44f\\\",\\\"systemUUID\\\":\\\"4881b7f2-3483-4763-9575-0355a3ee692e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.667943 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.669763 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.669819 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.669833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.669853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.669867 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.732916 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.733067 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.733247 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.733214 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.733293 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.733571 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.733720 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:51 crc kubenswrapper[4687]: E0312 16:03:51.733958 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.749632 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.765222 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.777055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.777200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.777211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.777228 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.777276 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.789680 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.804627 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.816756 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4e92d37-42e0-4d7c-8086-72828fa10e55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.836716 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.852426 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.878465 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.880036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.880076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: 
I0312 16:03:51.880087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.880105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.880118 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.888609 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.901543 4687 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.910140 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.919737 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.932482 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.943897 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.954207 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.963493 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.983093 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.983165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.983222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.983237 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:51 crc kubenswrapper[4687]: I0312 16:03:51.983247 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:51Z","lastTransitionTime":"2026-03-12T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.086482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.086549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.086568 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.086596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.086649 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.188888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.188961 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.188994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.189024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.189046 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.292575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.292622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.292636 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.292658 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.292672 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.395684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.395769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.395797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.395846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.395873 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.498645 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.498724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.498743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.498760 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.498774 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.601537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.601588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.601601 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.601619 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.601632 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.704074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.704137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.704152 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.704176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.704194 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.719305 4687 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.806988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.807083 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.807103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.807132 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.807149 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.910576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.910644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.910660 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.910688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:52 crc kubenswrapper[4687]: I0312 16:03:52.910712 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:52Z","lastTransitionTime":"2026-03-12T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.013803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.013857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.013866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.013882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.013892 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.116550 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.116631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.116658 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.116690 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.116713 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.219694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.219781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.219800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.219826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.219843 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.322858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.322920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.322935 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.322956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.322970 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.425614 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.425680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.425698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.425724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.425747 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.527882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.527939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.527956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.527979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.527996 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.630606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.630655 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.630668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.630688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.630702 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.732045 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.732163 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.732175 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.732100 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:53 crc kubenswrapper[4687]: E0312 16:03:53.732396 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:53 crc kubenswrapper[4687]: E0312 16:03:53.732466 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:53 crc kubenswrapper[4687]: E0312 16:03:53.732588 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:53 crc kubenswrapper[4687]: E0312 16:03:53.732714 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.733415 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.733459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.733475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.733494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.733509 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.836116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.836172 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.836185 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.836208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.836221 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.939709 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.939761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.939772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.939788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:53 crc kubenswrapper[4687]: I0312 16:03:53.939799 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:53Z","lastTransitionTime":"2026-03-12T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.042269 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.042319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.042330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.042345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.042356 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.146257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.146321 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.146344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.146411 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.146438 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.249287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.249350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.249431 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.249462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.249485 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.352702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.352778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.352811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.352843 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.352866 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.456811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.456869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.456886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.456907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.456924 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.560272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.560342 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.560457 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.560491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.560512 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.663297 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.663534 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.663560 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.663589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.663610 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.766298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.766395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.766420 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.766447 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.766469 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.869256 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.869298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.869306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.869322 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.869332 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.972030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.972101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.972126 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.972189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:54 crc kubenswrapper[4687]: I0312 16:03:54.972213 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:54Z","lastTransitionTime":"2026-03-12T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.075067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.075124 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.075137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.075154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.075165 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.177782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.177821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.177829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.177845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.177855 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.279841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.279872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.279880 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.279896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.279906 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.381902 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.381938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.381946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.381959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.381968 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.483982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.484019 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.484030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.484045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.484056 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.590834 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.590865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.590875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.590890 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.590899 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.692927 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.692982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.693000 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.693018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.693029 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.732646 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.732741 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:55 crc kubenswrapper[4687]: E0312 16:03:55.732847 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.733019 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.733479 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:55 crc kubenswrapper[4687]: E0312 16:03:55.733549 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:55 crc kubenswrapper[4687]: E0312 16:03:55.733603 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:55 crc kubenswrapper[4687]: E0312 16:03:55.733730 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.796012 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.796038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.796046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.796059 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.796067 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.897860 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.897885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.897892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.897905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:55 crc kubenswrapper[4687]: I0312 16:03:55.897915 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:55Z","lastTransitionTime":"2026-03-12T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.000459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.000497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.000509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.000526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.000539 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.073166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"32b208df7b86c149160d13b29230630bc1817b99168268a715575b4eda5c4494"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.073228 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.075678 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerStarted","Data":"f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.089315 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.102074 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.103462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.103525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.103548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.103575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.103599 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.120903 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.134445 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32b208df7b86c149160d13b29230630bc1817b99168268a715575b4eda5c4494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.145633 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4e92d37-42e0-4d7c-8086-72828fa10e55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.156919 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.169533 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.193103 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.200019 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.205595 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.205631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.205641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.205655 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.205664 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.208192 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.216450 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.223739 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not 
yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.236752 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.245568 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.255088 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.263881 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.307702 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.310089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.310165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.310189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.310217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.310718 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.316881 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.337869 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.348555 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4e92d37-42e0-4d7c-8086-72828fa10e55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6
775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.363191 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.374564 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.391335 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.400028 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.408643 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32b208df7b86c149160d13b29230630bc1817b99168268a715575b4eda5c4494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.414310 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc 
kubenswrapper[4687]: I0312 16:03:56.414347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.414379 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.414400 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.414414 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.419418 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.430502 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.440757 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.450111 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.457908 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.465527 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.471886 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.517303 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.517381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.517400 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.517422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.517437 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.619667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.619716 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.619734 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.619755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.619769 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.722062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.722104 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.722113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.722127 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.722136 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.824191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.824238 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.824249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.824264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.824275 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.926421 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.926472 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.926488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.926506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:56 crc kubenswrapper[4687]: I0312 16:03:56.926519 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:56Z","lastTransitionTime":"2026-03-12T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.029976 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.030439 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.030465 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.030496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.030519 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.080430 4687 generic.go:334] "Generic (PLEG): container finished" podID="10b18da7-d236-4556-a9ef-7d582b3ed224" containerID="f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a" exitCode=0 Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.080481 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerDied","Data":"f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.098734 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.118067 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.133221 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.133319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.133343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.133395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.133412 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.134022 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.143879 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.152700 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.166906 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.178229 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.194405 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.203226 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.219330 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.236121 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.236153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.236164 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.236179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.236190 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.243000 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.253953 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.263401 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32b208df7b86c149160d13b29230630bc1817b99168268a715575b4eda5c4494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.272478 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4e92d37-42e0-4d7c-8086-72828fa10e55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:
02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.285383 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.296503 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.338343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.338430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.338449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.338470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.338483 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.441301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.441381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.441398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.441415 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.441480 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.544698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.544745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.544759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.544774 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.544783 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.647043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.647085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.647096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.647114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.647127 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.732430 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.732451 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:03:57 crc kubenswrapper[4687]: E0312 16:03:57.733051 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.732572 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.732457 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:03:57 crc kubenswrapper[4687]: E0312 16:03:57.733114 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:03:57 crc kubenswrapper[4687]: E0312 16:03:57.733160 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:03:57 crc kubenswrapper[4687]: E0312 16:03:57.732993 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.749968 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.750026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.750046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.750073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.750094 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.855647 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.855681 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.855690 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.855705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.855714 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.957993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.958049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.958061 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.958081 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:57 crc kubenswrapper[4687]: I0312 16:03:57.958094 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:57Z","lastTransitionTime":"2026-03-12T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.060046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.060407 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.060422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.060440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.060451 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.084989 4687 generic.go:334] "Generic (PLEG): container finished" podID="10b18da7-d236-4556-a9ef-7d582b3ed224" containerID="a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813" exitCode=0 Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.085094 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerDied","Data":"a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.086414 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9k44w" event={"ID":"bc5d5523-b52d-4739-9a22-3abb886d7f0d","Type":"ContainerStarted","Data":"c04597142928391fc8bc391ee9d35898524a2f8a0fe45472693db2975801d784"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.089019 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"36477e3cc568946e67ce180e38cd52551657b825dfa1ad9ca6d0a48562443a41"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.091088 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1" exitCode=0 Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.091120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.099242 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.111169 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.123603 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-
12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.134665 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32b208df7b86c149160d13b29230630bc1817b99168268a715575b4eda5c4494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.143107 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4e92d37-42e0-4d7c-8086-72828fa10e55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.152625 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.164136 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.164167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.164177 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.164193 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.164204 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.168455 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.189491 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.198717 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.208970 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.217177 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.225810 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.236934 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.247318 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.257313 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.266333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.266374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.266386 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.266401 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.266410 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.267142 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.278206 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.290809 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.305870 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.318233 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.329653 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04597142928391fc8bc391ee9d35898524a2f8a0fe45472693db2975801d784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\
":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.344654 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.355442 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.364582 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.368217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.368267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.368280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.368297 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.368309 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.375171 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.391545 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.406767 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.419225 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36477e3cc568946e67ce180e38cd52551657b825dfa1ad9ca6d0a48562443a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.439730 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.449150 
4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.456817 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32b208df7b86c149160d13b29230630bc1817b99168268a715575b4eda5c4494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.464430 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4e92d37-42e0-4d7c-8086-72828fa10e55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.470749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.470789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.470802 
4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.470818 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.470830 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.573766 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.574105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.574114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.574130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.574139 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.676435 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.676475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.676487 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.676504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.676516 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.733110 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:03:58 crc kubenswrapper[4687]: E0312 16:03:58.733292 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.778590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.778637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.778649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.778670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.778681 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.882287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.882344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.882367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.882416 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.882433 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.984940 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.984985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.984994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.985007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:58 crc kubenswrapper[4687]: I0312 16:03:58.985016 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:58Z","lastTransitionTime":"2026-03-12T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.087993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.088033 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.088043 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.088058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.088068 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:59Z","lastTransitionTime":"2026-03-12T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.098486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.098530 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.098546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.098557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.098569 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.098581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.100826 4687 generic.go:334] "Generic (PLEG): container finished" podID="10b18da7-d236-4556-a9ef-7d582b3ed224" containerID="ffd3aa8542a0e2bbba362f529ee7a2c733a28bb22cdbb4afbd91447249c679d4" exitCode=0 Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.100869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerDied","Data":"ffd3aa8542a0e2bbba362f529ee7a2c733a28bb22cdbb4afbd91447249c679d4"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.123601 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-657xn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349416ea-6932-403a-8e28-2ab01e85402c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxr8w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-657xn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.142681 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785ed51-b59b-4ec7-b31c-a66279b9151c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32b208df7b86c149160d13b29230630bc1817b99168268a715575b4eda5c4494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4ghw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bxjh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.151615 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4e92d37-42e0-4d7c-8086-72828fa10e55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c42cc148f915f1b0fe4a827bc809510a0f6c86a4693458cda42e4792dad111df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950755e56b5fe2f4654105ce0d7c5fe6775cd6e8fe87cbf2fae49fc66cfe717e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.163681 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.178624 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36477e3cc568946e67ce180e38cd52551657b825dfa1ad9ca6d0a48562443a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.193899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 
16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.193938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.193952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.193968 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.193980 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:59Z","lastTransitionTime":"2026-03-12T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.193887 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3ee11e6-3caf-46f7-8321-84633755d718\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqfdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nj99p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.208248 
4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.218099 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9k44w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc5d5523-b52d-4739-9a22-3abb886d7f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c04597142928391fc8bc391ee9d35898524a2f8a0fe45472693db2975801d784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww5zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9k44w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.232025 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dt4jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7c33249-8ce6-49e4-a8a1-91bffa192e26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bv6jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dt4jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.239816 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc269c97-37cd-4773-ab83-9e47b2666fb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drz6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hj9kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.251198 4687 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:02:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T16:03:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0312 16:03:35.263739 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 16:03:35.263858 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 16:03:35.267068 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3819658977/tls.crt::/tmp/serving-cert-3819658977/tls.key\\\\\\\"\\\\nI0312 16:03:35.451875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 16:03:35.453312 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 16:03:35.453382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 16:03:35.453430 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 16:03:35.453457 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 16:03:35.456575 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 16:03:35.456601 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 16:03:35.456611 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 16:03:35.456614 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 16:03:35.456616 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 16:03:35.456619 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 16:03:35.456645 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 16:03:35.458288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T16:02:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:02:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:02:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.262465 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.271109 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.280649 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.288947 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.297561 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.297593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.297603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.297616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.297625 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:59Z","lastTransitionTime":"2026-03-12T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.304047 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd3aa8542a0e2bbba362f529ee7a2c733a28bb22cdbb4afbd91447249c679d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd3aa8542a0e2bbba362f529ee7a2c733a28bb22cdbb4afbd91447249c679d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.400073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.400109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.400118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.400134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:03:59 crc kubenswrapper[4687]: I0312 16:03:59.400145 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:03:59Z","lastTransitionTime":"2026-03-12T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.046663 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.046906 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.047056 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.047093 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.047381 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.047324749 +0000 UTC m=+105.011287093 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.047490 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.047507 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.047655 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.048424 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.048499 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.063489 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.063601 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.063694 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.063776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.063863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.064106 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.064214 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.064184985 +0000 UTC m=+105.028147369 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065255 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065307 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065335 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065459 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.065434958 +0000 UTC m=+105.029397342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065553 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065618 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.065599683 +0000 UTC m=+105.029562057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065740 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065811 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs podName:bb883751-bda8-4227-99fe-74d0b85cff17 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.065789938 +0000 UTC m=+105.029752322 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs") pod "network-metrics-daemon-d4g6l" (UID: "bb883751-bda8-4227-99fe-74d0b85cff17") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065950 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.065983 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.066002 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:04:00 crc kubenswrapper[4687]: E0312 16:04:00.066072 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.066051705 +0000 UTC m=+105.030014089 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.073379 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.073841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.074028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.074228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.074449 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.087916 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.107447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerStarted","Data":"d6f20b3e6aa621a3187916431507f7f97f28d3d95859be60ff401ba50edbc57a"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.191280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.191691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.191702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.191719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.191729 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.294193 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.294225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.294233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.294245 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.294254 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.397058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.397084 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.397092 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.397107 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.397116 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.499015 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.499051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.499063 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.499079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.499089 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.600783 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.600815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.600823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.600837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.600846 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.702638 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.702675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.702685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.702701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.702712 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.805149 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.805176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.805185 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.805197 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.805205 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.861480 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.907508 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.907841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.907850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.907863 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:00 crc kubenswrapper[4687]: I0312 16:04:00.907875 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:00Z","lastTransitionTime":"2026-03-12T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.010393 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.010429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.010440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.010456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.010466 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.112234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.112272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.112281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.112296 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.112309 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.114105 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.115643 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" event={"ID":"cc269c97-37cd-4773-ab83-9e47b2666fb4","Type":"ContainerStarted","Data":"a9a3e7c46df28b5c30973d7155e1fcce7912bbbfacb239b1d53948dfcf684ab0"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.115670 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" event={"ID":"cc269c97-37cd-4773-ab83-9e47b2666fb4","Type":"ContainerStarted","Data":"4c029991a1fec3cf624f6249cbc7d26c81eb294299be5baacead1471b220a5b1"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.117102 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dt4jw" event={"ID":"c7c33249-8ce6-49e4-a8a1-91bffa192e26","Type":"ContainerStarted","Data":"37f30df79a34baac7166a0e7ba2c10ea54afd37f1ec8ad8e479441fc0ababd4d"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.119286 4687 generic.go:334] "Generic (PLEG): container finished" podID="10b18da7-d236-4556-a9ef-7d582b3ed224" containerID="d6f20b3e6aa621a3187916431507f7f97f28d3d95859be60ff401ba50edbc57a" exitCode=0 Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.119349 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerDied","Data":"d6f20b3e6aa621a3187916431507f7f97f28d3d95859be60ff401ba50edbc57a"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.125286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"111427d85befc136029143c5c34d1a043d82475496d2ee769c70094467b1eb57"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.125327 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c6d6fe66181190215f079d4e85aba218dc497af81b6cfecbd1382d3a376548a1"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.126889 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-657xn" event={"ID":"349416ea-6932-403a-8e28-2ab01e85402c","Type":"ContainerStarted","Data":"885931e2bb826aa5c52a7c6af813d907ac341f7b4efdf1479caa94628e84fb43"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.128803 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0495b6a5b4dec43fe9dfeed931bb7f22136acd55ebbe2809a3c0814396d38d20"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.133841 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:04:01Z is after 2025-08-24T17:21:41Z" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.161618 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb883751-bda8-4227-99fe-74d0b85cff17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4bbzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d4g6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:04:01Z is after 2025-08-24T17:21:41Z" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.175750 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10b18da7-d236-4556-a9ef-7d582b3ed224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9a24eaf25cb41880e3b95c343102830da9bb7cee991a0275ce385b52de8f13a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2df9b59cfc7e20220790c75e49cd22bd42e529476daa185b05843e40aee6813\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd3aa8542a0e2bbba362f529ee7a2c733a28bb22cdbb4afbd91447249c679d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffd3aa8542a0e2bbba362f529ee7a2c733a28bb22cdbb4afbd91447249c679d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T16:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T16:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66cdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T16:03:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnhsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:04:01Z is after 2025-08-24T17:21:41Z" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.190336 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T16:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T16:04:01Z is after 2025-08-24T17:21:41Z" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.216606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.216655 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.216668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.216684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.216695 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.274863 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podStartSLOduration=55.274846594 podStartE2EDuration="55.274846594s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:01.274819543 +0000 UTC m=+90.238781887" watchObservedRunningTime="2026-03-12 16:04:01.274846594 +0000 UTC m=+90.238808938" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.302342 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.302326147 podStartE2EDuration="16.302326147s" podCreationTimestamp="2026-03-12 16:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:01.301495164 +0000 UTC m=+90.265457498" watchObservedRunningTime="2026-03-12 16:04:01.302326147 +0000 UTC m=+90.266288491" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.318690 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.318723 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.318732 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.318746 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.318755 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.370319 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9k44w" podStartSLOduration=55.370301453 podStartE2EDuration="55.370301453s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:01.37020082 +0000 UTC m=+90.334163164" watchObservedRunningTime="2026-03-12 16:04:01.370301453 +0000 UTC m=+90.334263797" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.395759 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hj9kq" podStartSLOduration=54.395743891 podStartE2EDuration="54.395743891s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:01.39570479 +0000 UTC m=+90.359667134" watchObservedRunningTime="2026-03-12 16:04:01.395743891 +0000 UTC m=+90.359706235" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.421374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.421412 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.421420 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.421436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.421447 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.425079 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.425063993 podStartE2EDuration="1.425063993s" podCreationTimestamp="2026-03-12 16:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:01.424394475 +0000 UTC m=+90.388356819" watchObservedRunningTime="2026-03-12 16:04:01.425063993 +0000 UTC m=+90.389026347" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.451837 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-657xn" podStartSLOduration=55.451823076 podStartE2EDuration="55.451823076s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:01.451440995 +0000 UTC m=+90.415403339" watchObservedRunningTime="2026-03-12 16:04:01.451823076 +0000 UTC m=+90.415785420" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.462374 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dt4jw" podStartSLOduration=55.46234816 podStartE2EDuration="55.46234816s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:01.461663742 +0000 UTC m=+90.425626086" watchObservedRunningTime="2026-03-12 16:04:01.46234816 +0000 UTC m=+90.426310514" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.524484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.524523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.524532 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.524546 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.524557 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.626788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.626819 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.626828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.626841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.626850 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.728801 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.728826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.728833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.728845 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.728853 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.734909 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.734985 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.735052 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:01 crc kubenswrapper[4687]: E0312 16:04:01.735412 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.735437 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:01 crc kubenswrapper[4687]: E0312 16:04:01.735567 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:04:01 crc kubenswrapper[4687]: E0312 16:04:01.735583 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:04:01 crc kubenswrapper[4687]: E0312 16:04:01.735652 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.831046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.831093 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.831103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.831116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.831124 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.832075 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.832117 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.832130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.832142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.832152 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T16:04:01Z","lastTransitionTime":"2026-03-12T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.874987 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl"] Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.875676 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.878176 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.878704 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.879222 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.879260 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.898340 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/628bf5a8-56da-4952-997f-5ad0396e09fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.898407 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/628bf5a8-56da-4952-997f-5ad0396e09fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.898428 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/628bf5a8-56da-4952-997f-5ad0396e09fb-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.898465 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/628bf5a8-56da-4952-997f-5ad0396e09fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.898484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/628bf5a8-56da-4952-997f-5ad0396e09fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.945583 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.999132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/628bf5a8-56da-4952-997f-5ad0396e09fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.999195 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/628bf5a8-56da-4952-997f-5ad0396e09fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.999221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/628bf5a8-56da-4952-997f-5ad0396e09fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.999273 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/628bf5a8-56da-4952-997f-5ad0396e09fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.999290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/628bf5a8-56da-4952-997f-5ad0396e09fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.999311 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/628bf5a8-56da-4952-997f-5ad0396e09fb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:01 crc kubenswrapper[4687]: I0312 16:04:01.999411 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/628bf5a8-56da-4952-997f-5ad0396e09fb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.000275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/628bf5a8-56da-4952-997f-5ad0396e09fb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.004636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/628bf5a8-56da-4952-997f-5ad0396e09fb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.014428 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/628bf5a8-56da-4952-997f-5ad0396e09fb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6fpxl\" (UID: \"628bf5a8-56da-4952-997f-5ad0396e09fb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.047731 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.057903 4687 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.134110 4687 generic.go:334] "Generic (PLEG): container finished" podID="10b18da7-d236-4556-a9ef-7d582b3ed224" containerID="ac6731444e4184ae204b52556f248dd5abd8eb46760204a0a9737c4e519a70ed" exitCode=0 Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.134151 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerDied","Data":"ac6731444e4184ae204b52556f248dd5abd8eb46760204a0a9737c4e519a70ed"} Mar 12 16:04:02 crc kubenswrapper[4687]: I0312 16:04:02.271333 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" Mar 12 16:04:02 crc kubenswrapper[4687]: W0312 16:04:02.283985 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod628bf5a8_56da_4952_997f_5ad0396e09fb.slice/crio-247542d16402979a32ca7ee46a9dcc41effeb8e597535463e62ef36dfbe775ed WatchSource:0}: Error finding container 247542d16402979a32ca7ee46a9dcc41effeb8e597535463e62ef36dfbe775ed: Status 404 returned error can't find the container with id 247542d16402979a32ca7ee46a9dcc41effeb8e597535463e62ef36dfbe775ed Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.140955 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerStarted","Data":"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9"} Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.141404 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.141423 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.144844 4687 generic.go:334] "Generic (PLEG): container finished" podID="10b18da7-d236-4556-a9ef-7d582b3ed224" containerID="ae9e092a7f1906203ee5604dc41e1620413dcd2b96adaf1be9a7c71bbf530c83" exitCode=0 Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.144917 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerDied","Data":"ae9e092a7f1906203ee5604dc41e1620413dcd2b96adaf1be9a7c71bbf530c83"} Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.146149 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" event={"ID":"628bf5a8-56da-4952-997f-5ad0396e09fb","Type":"ContainerStarted","Data":"4b53ffcb54c901f265b04c83c789fe613fe8c158120c651a041d5a69d5646284"} Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.146184 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" event={"ID":"628bf5a8-56da-4952-997f-5ad0396e09fb","Type":"ContainerStarted","Data":"247542d16402979a32ca7ee46a9dcc41effeb8e597535463e62ef36dfbe775ed"} Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.176697 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podStartSLOduration=57.176673089 podStartE2EDuration="57.176673089s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:03.175296331 +0000 UTC m=+92.139258675" watchObservedRunningTime="2026-03-12 16:04:03.176673089 +0000 UTC m=+92.140635443" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.178272 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.207057 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6fpxl" podStartSLOduration=57.207036818 podStartE2EDuration="57.207036818s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:03.206056932 +0000 UTC m=+92.170019286" watchObservedRunningTime="2026-03-12 16:04:03.207036818 +0000 UTC m=+92.170999172" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.733026 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.733084 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.733040 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:03 crc kubenswrapper[4687]: I0312 16:04:03.733230 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:03 crc kubenswrapper[4687]: E0312 16:04:03.733184 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:04:03 crc kubenswrapper[4687]: E0312 16:04:03.733310 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:04:03 crc kubenswrapper[4687]: E0312 16:04:03.733427 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:04:03 crc kubenswrapper[4687]: E0312 16:04:03.733503 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:04:04 crc kubenswrapper[4687]: I0312 16:04:04.098107 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:04:04 crc kubenswrapper[4687]: I0312 16:04:04.155008 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:04:04 crc kubenswrapper[4687]: I0312 16:04:04.155952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" event={"ID":"10b18da7-d236-4556-a9ef-7d582b3ed224","Type":"ContainerStarted","Data":"0c6d54d8147f399183b89e40cdd743f445b27118c866cf8431aa1edbdc8229a1"} Mar 12 16:04:05 crc kubenswrapper[4687]: I0312 16:04:05.315166 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gnhsw" podStartSLOduration=59.315146526 podStartE2EDuration="59.315146526s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:04.218293631 +0000 UTC m=+93.182255975" watchObservedRunningTime="2026-03-12 16:04:05.315146526 +0000 UTC m=+94.279108870" Mar 12 16:04:05 crc kubenswrapper[4687]: I0312 16:04:05.316257 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-d4g6l"] Mar 12 16:04:05 crc kubenswrapper[4687]: I0312 16:04:05.316392 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:05 crc kubenswrapper[4687]: E0312 16:04:05.316495 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:04:05 crc kubenswrapper[4687]: I0312 16:04:05.732084 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:05 crc kubenswrapper[4687]: I0312 16:04:05.732145 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:05 crc kubenswrapper[4687]: E0312 16:04:05.732547 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 16:04:05 crc kubenswrapper[4687]: I0312 16:04:05.732175 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:05 crc kubenswrapper[4687]: E0312 16:04:05.732670 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 16:04:05 crc kubenswrapper[4687]: E0312 16:04:05.732806 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 16:04:06 crc kubenswrapper[4687]: I0312 16:04:06.732228 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:06 crc kubenswrapper[4687]: E0312 16:04:06.732400 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d4g6l" podUID="bb883751-bda8-4227-99fe-74d0b85cff17" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.584228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.584310 4687 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.615236 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g4lgw"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.615841 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fzgfd"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.616163 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.616408 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5hwpl"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.616556 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.616769 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.617892 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.618133 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.619305 4687 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.619338 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.619675 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.621869 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb"] Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.622984 4687 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623026 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623078 4687 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623092 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623338 4687 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the 
namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623360 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.623438 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623457 4687 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623519 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623467 4687 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623559 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623675 4687 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623700 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 
16:04:07.623747 4687 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623772 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623834 4687 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623858 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.623887 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623898 4687 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623914 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.623956 4687 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.623974 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: 
configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.624193 4687 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.624291 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.625455 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.625646 4687 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.625679 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.625747 4687 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.625772 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.627472 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.627690 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.628038 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q8mpl"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.628319 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.628323 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.628496 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.628714 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.629000 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hfzwb"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.629344 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.629445 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.629578 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.630073 4687 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.630109 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.630125 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.630231 4687 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.630257 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.630317 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.630405 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.630440 4687 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.630559 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.630701 4687 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.630786 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.631620 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.632094 4687 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.632132 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: W0312 16:04:07.632668 4687 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Mar 12 16:04:07 crc kubenswrapper[4687]: E0312 16:04:07.632697 4687 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.632727 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-crmcv"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.633261 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.633953 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-l4j4z"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.634338 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.634755 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sqhmx"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.648612 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.648652 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.648864 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.649332 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.649464 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.649544 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.650103 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.650856 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.651124 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.651426 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.651522 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.651568 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.651932 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.651952 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.651934 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.652434 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.653611 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.653885 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.654019 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.654342 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.661342 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.661655 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.661965 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.662016 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.662236 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.665328 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.665520 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.665639 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.665953 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.666594 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.668404 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.668588 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.669058 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.669101 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.669153 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.671809 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.672141 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.672162 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.672294 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.672432 4687 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.672931 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673098 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673145 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673275 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673341 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673403 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673410 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-etcd-client\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673432 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-etcd-serving-ca\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673456 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd24p\" (UniqueName: \"kubernetes.io/projected/b4296ad5-b919-42fb-964b-3e067a376385-kube-api-access-fd24p\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673473 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8e442fd-56a6-49e5-b34d-86331dab75f4-node-pullsecrets\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673487 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673516 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673585 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673611 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673720 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673761 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673520 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr49m\" (UniqueName: \"kubernetes.io/projected/e8e442fd-56a6-49e5-b34d-86331dab75f4-kube-api-access-hr49m\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673846 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-etcd-client\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8e442fd-56a6-49e5-b34d-86331dab75f4-audit-dir\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673949 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.673984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.674210 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a654f3a-4b2a-408c-87c1-908616a79eb3-audit-dir\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.675799 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.675962 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxmrf\" (UniqueName: \"kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676053 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-audit\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676147 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676223 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-config\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-serving-cert\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676379 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbpdc\" (UniqueName: \"kubernetes.io/projected/6a654f3a-4b2a-408c-87c1-908616a79eb3-kube-api-access-xbpdc\") pod 
\"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676458 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-encryption-config\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676618 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676707 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676771 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-encryption-config\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676908 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htt8j\" (UniqueName: \"kubernetes.io/projected/5046d5fc-693f-47bc-bae2-c3430c7e6b24-kube-api-access-htt8j\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.676981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-audit-policies\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.677060 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.677143 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-image-import-ca\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.677243 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-serving-cert\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.677316 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.682895 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-d8928"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.683599 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.684386 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.684597 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.686241 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.686342 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b6bc4"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.687020 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.694479 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.695041 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.695543 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.695573 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.696295 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.696573 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.698659 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.699460 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.701208 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.709415 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kplxq"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.709919 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-55qf7"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710289 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710384 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710710 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710725 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710822 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710852 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710737 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710776 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.711007 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.710777 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.711065 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.725645 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.726662 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.726764 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.726883 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.727404 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.727984 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.728740 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.729151 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.730497 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.731010 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.731785 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tvj7q"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.734220 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.734666 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.734937 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.770460 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.770798 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.770847 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.771788 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.775021 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.777856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-auth-proxy-config\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.777903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c66c344-67c8-40f9-9acd-384c4de62c77-config\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.777929 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-console-config\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.777953 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wq6m\" (UniqueName: \"kubernetes.io/projected/076e302f-a426-407f-83bd-208837fd5d73-kube-api-access-9wq6m\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.777993 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778035 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgl7q\" (UniqueName: \"kubernetes.io/projected/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-kube-api-access-jgl7q\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778063 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778090 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-config\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778112 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-machine-approver-tls\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778133 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvhdt\" (UniqueName: \"kubernetes.io/projected/fcbdc66f-c79c-4a57-a030-665f5320b182-kube-api-access-wvhdt\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778154 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b641df97-62aa-491f-88f2-b92a688e9854-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778177 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fc8a11b-111f-4853-8227-7718374e75e7-metrics-tls\") pod \"dns-operator-744455d44c-sqhmx\" (UID: \"1fc8a11b-111f-4853-8227-7718374e75e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfsv\" (UniqueName: \"kubernetes.io/projected/1fc8a11b-111f-4853-8227-7718374e75e7-kube-api-access-7kfsv\") pod \"dns-operator-744455d44c-sqhmx\" (UID: \"1fc8a11b-111f-4853-8227-7718374e75e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778220 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-dir\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6ff\" (UniqueName: \"kubernetes.io/projected/96509adf-07e9-4d9a-926e-2c10cfc69b31-kube-api-access-vd6ff\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778279 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88qp8\" (UniqueName: \"kubernetes.io/projected/b641df97-62aa-491f-88f2-b92a688e9854-kube-api-access-88qp8\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778305 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-serving-cert\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778327 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbpdc\" (UniqueName: \"kubernetes.io/projected/6a654f3a-4b2a-408c-87c1-908616a79eb3-kube-api-access-xbpdc\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778335 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778350 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qchfj\" (UniqueName: \"kubernetes.io/projected/076c2fbb-88e7-4df6-92ae-d4919b66f7e4-kube-api-access-qchfj\") pod \"cluster-samples-operator-665b6dd947-z5jf8\" (UID: \"076c2fbb-88e7-4df6-92ae-d4919b66f7e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778419 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-encryption-config\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778443 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-metrics-certs\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778471 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778496 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: 
I0312 16:04:07.778524 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/076e302f-a426-407f-83bd-208837fd5d73-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778591 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-encryption-config\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778617 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-config\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778696 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-client\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htt8j\" (UniqueName: \"kubernetes.io/projected/5046d5fc-693f-47bc-bae2-c3430c7e6b24-kube-api-access-htt8j\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-audit-policies\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778770 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-serving-cert\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778828 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgmd\" (UniqueName: \"kubernetes.io/projected/8ae2744a-ce14-428c-a7bb-91b83a738577-kube-api-access-2vgmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778879 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c66c344-67c8-40f9-9acd-384c4de62c77-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-stats-auth\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778925 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778949 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/076c2fbb-88e7-4df6-92ae-d4919b66f7e4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z5jf8\" (UID: \"076c2fbb-88e7-4df6-92ae-d4919b66f7e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778976 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-image-import-ca\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.778998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae2744a-ce14-428c-a7bb-91b83a738577-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779047 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/076e302f-a426-407f-83bd-208837fd5d73-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779084 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-serving-cert\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779110 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779134 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd72ea9-e99c-4d99-914c-6984e54ee89f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-ca\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779183 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-service-ca\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-default-certificate\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3dd72ea9-e99c-4d99-914c-6984e54ee89f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779277 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779300 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn5sj\" (UniqueName: \"kubernetes.io/projected/f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8-kube-api-access-cn5sj\") pod \"downloads-7954f5f757-crmcv\" (UID: \"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8\") " pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779322 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96509adf-07e9-4d9a-926e-2c10cfc69b31-serving-cert\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779360 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-oauth-config\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779401 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779423 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-trusted-ca\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779446 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-config\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779473 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b641df97-62aa-491f-88f2-b92a688e9854-trusted-ca\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7rf6\" (UniqueName: \"kubernetes.io/projected/3dd72ea9-e99c-4d99-914c-6984e54ee89f-kube-api-access-m7rf6\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779532 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-policies\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779559 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779585 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-etcd-client\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779635 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-etcd-serving-ca\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779658 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd24p\" (UniqueName: \"kubernetes.io/projected/b4296ad5-b919-42fb-964b-3e067a376385-kube-api-access-fd24p\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8e442fd-56a6-49e5-b34d-86331dab75f4-node-pullsecrets\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbdc66f-c79c-4a57-a030-665f5320b182-service-ca-bundle\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779726 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779748 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779765 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr49m\" (UniqueName: \"kubernetes.io/projected/e8e442fd-56a6-49e5-b34d-86331dab75f4-kube-api-access-hr49m\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779843 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779877 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-etcd-client\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779906 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfb2l\" (UniqueName: \"kubernetes.io/projected/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-kube-api-access-zfb2l\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8e442fd-56a6-49e5-b34d-86331dab75f4-audit-dir\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779973 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.779994 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c66c344-67c8-40f9-9acd-384c4de62c77-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.780007 4687 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.780587 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.780718 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-audit-policies\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.780836 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.781156 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.781444 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2kgqm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.781795 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.780015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8gfd\" (UniqueName: \"kubernetes.io/projected/a123cd77-f222-4d99-b77f-f11c6c323005-kube-api-access-b8gfd\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.781926 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.781940 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.781979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-serving-cert\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782008 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lgf9\" (UniqueName: \"kubernetes.io/projected/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-kube-api-access-2lgf9\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782032 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b641df97-62aa-491f-88f2-b92a688e9854-metrics-tls\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782047 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782065 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a654f3a-4b2a-408c-87c1-908616a79eb3-audit-dir\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-config\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782112 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782118 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-trusted-ca-bundle\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782140 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-oauth-serving-cert\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/076e302f-a426-407f-83bd-208837fd5d73-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782190 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxmrf\" (UniqueName: \"kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782194 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae2744a-ce14-428c-a7bb-91b83a738577-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-service-ca\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-audit\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782424 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782867 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a654f3a-4b2a-408c-87c1-908616a79eb3-audit-dir\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.783046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-audit\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.782980 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.783281 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-image-import-ca\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.783563 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.783888 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.784058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a654f3a-4b2a-408c-87c1-908616a79eb3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.784158 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.784843 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.785065 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.785210 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-config\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.785915 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-etcd-serving-ca\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.785531 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8e442fd-56a6-49e5-b34d-86331dab75f4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.785598 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e8e442fd-56a6-49e5-b34d-86331dab75f4-node-pullsecrets\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.785604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8e442fd-56a6-49e5-b34d-86331dab75f4-audit-dir\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.785266 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.786873 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-encryption-config\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.788234 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-etcd-client\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.788281 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lmlgb"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.788750 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.790123 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wn49x"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.790512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-serving-cert\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.790752 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.790940 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6fpz6"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.791004 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-etcd-client\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.791279 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e8e442fd-56a6-49e5-b34d-86331dab75f4-encryption-config\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.791684 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a654f3a-4b2a-408c-87c1-908616a79eb3-serving-cert\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.792035 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.792276 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.792769 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.793983 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.794958 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.796079 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.796855 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.800236 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tpmtb"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.800779 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.802412 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.803033 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.803617 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.804807 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5r45"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.805514 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.805851 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555524-2x297"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.806339 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555524-2x297" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.807022 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.807838 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.808876 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-crmcv"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.810262 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5hwpl"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.811486 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fzgfd"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.813446 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q8mpl"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.815526 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.818894 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g4lgw"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.819818 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.825307 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b6bc4"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.827629 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tvj7q"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.829993 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.833074 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hfzwb"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.835104 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sqhmx"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.838102 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.838400 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l4j4z"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.841252 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.842238 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.843257 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.844578 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.846271 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.847193 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555524-2x297"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.848575 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kplxq"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.850010 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tpmtb"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.850896 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6fpz6"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.858869 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.858750 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.860048 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.860915 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.863131 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.863971 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2kgqm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.865107 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.866248 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.874561 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.875629 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.876663 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lmlgb"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.877729 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.878070 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.879110 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dtxpv"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.880507 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.880925 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2d6vc"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.881436 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.882147 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mpnwm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883070 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883276 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883478 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbdc66f-c79c-4a57-a030-665f5320b182-service-ca-bundle\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883513 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883582 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfb2l\" (UniqueName: \"kubernetes.io/projected/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-kube-api-access-zfb2l\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883625 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c66c344-67c8-40f9-9acd-384c4de62c77-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883651 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8gfd\" (UniqueName: \"kubernetes.io/projected/a123cd77-f222-4d99-b77f-f11c6c323005-kube-api-access-b8gfd\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883676 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-serving-cert\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883730 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lgf9\" (UniqueName: \"kubernetes.io/projected/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-kube-api-access-2lgf9\") pod \"console-operator-58897d9998-hfzwb\" (UID: 
\"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b641df97-62aa-491f-88f2-b92a688e9854-metrics-tls\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883777 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/076e302f-a426-407f-83bd-208837fd5d73-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-config\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-trusted-ca-bundle\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-oauth-serving-cert\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883881 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae2744a-ce14-428c-a7bb-91b83a738577-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-service-ca\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883935 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-auth-proxy-config\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883960 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1c66c344-67c8-40f9-9acd-384c4de62c77-config\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.883984 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-console-config\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wq6m\" (UniqueName: \"kubernetes.io/projected/076e302f-a426-407f-83bd-208837fd5d73-kube-api-access-9wq6m\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884036 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgl7q\" (UniqueName: \"kubernetes.io/projected/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-kube-api-access-jgl7q\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884055 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884095 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-machine-approver-tls\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884118 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvhdt\" (UniqueName: \"kubernetes.io/projected/fcbdc66f-c79c-4a57-a030-665f5320b182-kube-api-access-wvhdt\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884139 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b641df97-62aa-491f-88f2-b92a688e9854-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884162 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fc8a11b-111f-4853-8227-7718374e75e7-metrics-tls\") pod \"dns-operator-744455d44c-sqhmx\" 
(UID: \"1fc8a11b-111f-4853-8227-7718374e75e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884182 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfsv\" (UniqueName: \"kubernetes.io/projected/1fc8a11b-111f-4853-8227-7718374e75e7-kube-api-access-7kfsv\") pod \"dns-operator-744455d44c-sqhmx\" (UID: \"1fc8a11b-111f-4853-8227-7718374e75e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884201 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-dir\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6ff\" (UniqueName: \"kubernetes.io/projected/96509adf-07e9-4d9a-926e-2c10cfc69b31-kube-api-access-vd6ff\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88qp8\" (UniqueName: \"kubernetes.io/projected/b641df97-62aa-491f-88f2-b92a688e9854-kube-api-access-88qp8\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qchfj\" (UniqueName: \"kubernetes.io/projected/076c2fbb-88e7-4df6-92ae-d4919b66f7e4-kube-api-access-qchfj\") pod \"cluster-samples-operator-665b6dd947-z5jf8\" (UID: \"076c2fbb-88e7-4df6-92ae-d4919b66f7e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884307 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-metrics-certs\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884335 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/076e302f-a426-407f-83bd-208837fd5d73-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884363 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-config\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884408 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884436 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-client\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-serving-cert\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.884744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-dir\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885029 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-config\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885493 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgmd\" (UniqueName: \"kubernetes.io/projected/8ae2744a-ce14-428c-a7bb-91b83a738577-kube-api-access-2vgmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885571 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c66c344-67c8-40f9-9acd-384c4de62c77-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885576 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-oauth-serving-cert\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885597 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/076c2fbb-88e7-4df6-92ae-d4919b66f7e4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z5jf8\" (UID: \"076c2fbb-88e7-4df6-92ae-d4919b66f7e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-stats-auth\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885652 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885677 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885705 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae2744a-ce14-428c-a7bb-91b83a738577-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885730 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/076e302f-a426-407f-83bd-208837fd5d73-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885757 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd72ea9-e99c-4d99-914c-6984e54ee89f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885797 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-ca\") pod 
\"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885821 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-service-ca\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885833 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-trusted-ca-bundle\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885847 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3dd72ea9-e99c-4d99-914c-6984e54ee89f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885892 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885936 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn5sj\" (UniqueName: \"kubernetes.io/projected/f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8-kube-api-access-cn5sj\") pod \"downloads-7954f5f757-crmcv\" (UID: \"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8\") " pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885961 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-default-certificate\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885980 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96509adf-07e9-4d9a-926e-2c10cfc69b31-serving-cert\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.885996 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-oauth-config\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-trusted-ca\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886037 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-config\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886053 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b641df97-62aa-491f-88f2-b92a688e9854-trusted-ca\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886125 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7rf6\" (UniqueName: \"kubernetes.io/projected/3dd72ea9-e99c-4d99-914c-6984e54ee89f-kube-api-access-m7rf6\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886177 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-policies\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-console-config\") pod 
\"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886697 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.886801 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-config\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.887070 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b641df97-62aa-491f-88f2-b92a688e9854-metrics-tls\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.887412 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.887618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-service-ca\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.887796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-auth-proxy-config\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.888020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/076e302f-a426-407f-83bd-208837fd5d73-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.888119 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b641df97-62aa-491f-88f2-b92a688e9854-trusted-ca\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.888311 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-serving-cert\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " 
pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.888892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae2744a-ce14-428c-a7bb-91b83a738577-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.888899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-ca\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.888984 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5r45"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.889027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.889281 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3dd72ea9-e99c-4d99-914c-6984e54ee89f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.889391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.889828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-config\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.890480 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-trusted-ca\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.890902 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-policies\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.891631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.891870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.892572 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-service-ca\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.894400 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96509adf-07e9-4d9a-926e-2c10cfc69b31-serving-cert\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.894940 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.895408 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-serving-cert\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.895730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae2744a-ce14-428c-a7bb-91b83a738577-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.895882 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.895927 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-oauth-config\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " 
pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.896697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.897103 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dtxpv"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.897933 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/076c2fbb-88e7-4df6-92ae-d4919b66f7e4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z5jf8\" (UID: \"076c2fbb-88e7-4df6-92ae-d4919b66f7e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.898074 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mpnwm"] Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.898478 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.898700 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd72ea9-e99c-4d99-914c-6984e54ee89f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.899430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/076e302f-a426-407f-83bd-208837fd5d73-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.900587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-machine-approver-tls\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.900630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96509adf-07e9-4d9a-926e-2c10cfc69b31-etcd-client\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.900660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 
16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.900810 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.900856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.901088 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fc8a11b-111f-4853-8227-7718374e75e7-metrics-tls\") pod \"dns-operator-744455d44c-sqhmx\" (UID: \"1fc8a11b-111f-4853-8227-7718374e75e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.902517 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.904933 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbdc66f-c79c-4a57-a030-665f5320b182-service-ca-bundle\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.918923 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.925253 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-stats-auth\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.942919 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.947240 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-metrics-certs\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.958309 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.979987 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.985343 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fcbdc66f-c79c-4a57-a030-665f5320b182-default-certificate\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:07 crc kubenswrapper[4687]: I0312 16:04:07.998729 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.018547 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.038774 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.058508 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.064438 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c66c344-67c8-40f9-9acd-384c4de62c77-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.078287 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.098436 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.119110 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.125853 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c66c344-67c8-40f9-9acd-384c4de62c77-config\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.138693 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.158177 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.179428 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.198791 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.219005 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 
16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.238703 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.258651 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.279681 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.299086 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.319944 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.339831 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.359308 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.379854 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.399200 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.418475 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.439066 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.494586 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.515029 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.536157 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.556941 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.609778 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbpdc\" (UniqueName: \"kubernetes.io/projected/6a654f3a-4b2a-408c-87c1-908616a79eb3-kube-api-access-xbpdc\") pod \"apiserver-7bbb656c7d-52fkb\" (UID: \"6a654f3a-4b2a-408c-87c1-908616a79eb3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.615895 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.628399 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.635586 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.655702 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.675890 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.696621 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.715967 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.732007 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.754550 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.756789 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.775969 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.779245 4687 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.779330 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert podName:b4296ad5-b919-42fb-964b-3e067a376385 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.279303601 +0000 UTC m=+98.243265945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-r67vt" (UID: "b4296ad5-b919-42fb-964b-3e067a376385") : failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.780717 4687 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.780774 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.28076113 +0000 UTC m=+98.244723474 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.780821 4687 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.780859 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.280850463 +0000 UTC m=+98.244812807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.784789 4687 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.784909 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.284881672 +0000 UTC m=+98.248844016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.784955 4687 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.784987 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.284978584 +0000 UTC m=+98.248940928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785087 4687 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785132 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. 
No retries permitted until 2026-03-12 16:04:09.285120778 +0000 UTC m=+98.249083122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785688 4687 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785774 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.285760786 +0000 UTC m=+98.249723130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785861 4687 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785903 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config podName:b4296ad5-b919-42fb-964b-3e067a376385 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.285891689 +0000 UTC m=+98.249854033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config") pod "openshift-apiserver-operator-796bbdcf4f-r67vt" (UID: "b4296ad5-b919-42fb-964b-3e067a376385") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785941 4687 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.785969 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.285960061 +0000 UTC m=+98.249922405 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.786000 4687 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: E0312 16:04:08.786024 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:09.286016723 +0000 UTC m=+98.249979067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.793342 4687 request.go:700] Waited for 1.010758218s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.796229 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.814687 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.833492 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb"] Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.835463 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.854802 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.874978 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.894955 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.914489 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.935992 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.955762 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 16:04:08 crc kubenswrapper[4687]: I0312 16:04:08.995395 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.015615 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.034442 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.054956 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.074289 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.120457 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr49m\" (UniqueName: \"kubernetes.io/projected/e8e442fd-56a6-49e5-b34d-86331dab75f4-kube-api-access-hr49m\") pod \"apiserver-76f77b778f-g4lgw\" (UID: \"e8e442fd-56a6-49e5-b34d-86331dab75f4\") " pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.136792 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.155547 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.171942 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" event={"ID":"6a654f3a-4b2a-408c-87c1-908616a79eb3","Type":"ContainerStarted","Data":"f8d2817dc23881f667ae830b1f7e5e267b934162b6b1ac63431dc7063636fffc"} Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.175897 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.182925 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.195186 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.220942 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.235290 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.256806 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.275178 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.295233 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.303299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.303749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.303779 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.303815 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.303856 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.303878 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.303958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.304035 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.304069 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.304110 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.324817 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.335179 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.355220 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.360159 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g4lgw"] Mar 12 16:04:09 crc kubenswrapper[4687]: W0312 16:04:09.367558 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8e442fd_56a6_49e5_b34d_86331dab75f4.slice/crio-233a1ee624c10decf0ccad6d72dc2e825481b8c158e14430962f3dc6ac276be8 WatchSource:0}: Error finding container 233a1ee624c10decf0ccad6d72dc2e825481b8c158e14430962f3dc6ac276be8: Status 404 returned error can't find the container with id 233a1ee624c10decf0ccad6d72dc2e825481b8c158e14430962f3dc6ac276be8 Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.375528 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.394621 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 16:04:09 crc 
kubenswrapper[4687]: I0312 16:04:09.414400 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.435590 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.455691 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.475928 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.495388 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.515537 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.534746 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.554982 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.575601 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 16:04:09 crc kubenswrapper[4687]: E0312 16:04:09.595192 4687 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.595809 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.614764 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.634931 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.655235 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.674486 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.694503 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.714812 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.734045 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:04:09 crc kubenswrapper[4687]: E0312 16:04:09.734306 4687 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.736164 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.755509 4687 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.774509 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.793585 4687 request.go:700] Waited for 1.911968558s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.795162 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.815281 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.835603 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.855736 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.874468 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.895271 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.935448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfb2l\" (UniqueName: \"kubernetes.io/projected/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-kube-api-access-zfb2l\") pod \"oauth-openshift-558db77b4-q8mpl\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.955685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8gfd\" (UniqueName: \"kubernetes.io/projected/a123cd77-f222-4d99-b77f-f11c6c323005-kube-api-access-b8gfd\") pod \"console-f9d7485db-l4j4z\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.973534 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c66c344-67c8-40f9-9acd-384c4de62c77-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9zhk9\" (UID: \"1c66c344-67c8-40f9-9acd-384c4de62c77\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:09 crc kubenswrapper[4687]: E0312 16:04:09.985813 4687 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:09 crc kubenswrapper[4687]: I0312 16:04:09.991243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lgf9\" (UniqueName: \"kubernetes.io/projected/ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe-kube-api-access-2lgf9\") pod \"console-operator-58897d9998-hfzwb\" (UID: \"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe\") " pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.013702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgl7q\" (UniqueName: \"kubernetes.io/projected/c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b-kube-api-access-jgl7q\") pod \"machine-approver-56656f9798-d8928\" (UID: \"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.029727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6ff\" (UniqueName: \"kubernetes.io/projected/96509adf-07e9-4d9a-926e-2c10cfc69b31-kube-api-access-vd6ff\") pod \"etcd-operator-b45778765-b6bc4\" (UID: \"96509adf-07e9-4d9a-926e-2c10cfc69b31\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.051910 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfsv\" (UniqueName: \"kubernetes.io/projected/1fc8a11b-111f-4853-8227-7718374e75e7-kube-api-access-7kfsv\") pod \"dns-operator-744455d44c-sqhmx\" (UID: \"1fc8a11b-111f-4853-8227-7718374e75e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.053474 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.070567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88qp8\" (UniqueName: \"kubernetes.io/projected/b641df97-62aa-491f-88f2-b92a688e9854-kube-api-access-88qp8\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.091510 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qchfj\" (UniqueName: \"kubernetes.io/projected/076c2fbb-88e7-4df6-92ae-d4919b66f7e4-kube-api-access-qchfj\") pod \"cluster-samples-operator-665b6dd947-z5jf8\" (UID: \"076c2fbb-88e7-4df6-92ae-d4919b66f7e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.113710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/076e302f-a426-407f-83bd-208837fd5d73-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.130616 4687 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.142165 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wq6m\" (UniqueName: \"kubernetes.io/projected/076e302f-a426-407f-83bd-208837fd5d73-kube-api-access-9wq6m\") pod \"cluster-image-registry-operator-dc59b4c8b-fzz7k\" (UID: \"076e302f-a426-407f-83bd-208837fd5d73\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.150401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn5sj\" (UniqueName: \"kubernetes.io/projected/f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8-kube-api-access-cn5sj\") pod \"downloads-7954f5f757-crmcv\" (UID: \"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8\") " pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.173775 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b641df97-62aa-491f-88f2-b92a688e9854-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4n7wm\" (UID: \"b641df97-62aa-491f-88f2-b92a688e9854\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.175896 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.190182 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvhdt\" (UniqueName: \"kubernetes.io/projected/fcbdc66f-c79c-4a57-a030-665f5320b182-kube-api-access-wvhdt\") pod \"router-default-5444994796-55qf7\" (UID: \"fcbdc66f-c79c-4a57-a030-665f5320b182\") " pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.196190 4687 generic.go:334] "Generic (PLEG): container finished" podID="6a654f3a-4b2a-408c-87c1-908616a79eb3" containerID="b0395a716291c555f45c4ee1039a6a7158dbe3887aab4a4f138ee725eda87616" exitCode=0 Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.196283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" event={"ID":"6a654f3a-4b2a-408c-87c1-908616a79eb3","Type":"ContainerDied","Data":"b0395a716291c555f45c4ee1039a6a7158dbe3887aab4a4f138ee725eda87616"} Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.200427 4687 generic.go:334] "Generic (PLEG): container finished" podID="e8e442fd-56a6-49e5-b34d-86331dab75f4" containerID="af5827b9175003703390e84a40a14efc3c9a35ba6be8c68520fd016c087508c3" exitCode=0 Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.200479 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" event={"ID":"e8e442fd-56a6-49e5-b34d-86331dab75f4","Type":"ContainerDied","Data":"af5827b9175003703390e84a40a14efc3c9a35ba6be8c68520fd016c087508c3"} Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.200514 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" event={"ID":"e8e442fd-56a6-49e5-b34d-86331dab75f4","Type":"ContainerStarted","Data":"233a1ee624c10decf0ccad6d72dc2e825481b8c158e14430962f3dc6ac276be8"} Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.209738 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.209917 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.212961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7rf6\" (UniqueName: \"kubernetes.io/projected/3dd72ea9-e99c-4d99-914c-6984e54ee89f-kube-api-access-m7rf6\") pod \"openshift-config-operator-7777fb866f-qkz6l\" (UID: \"3dd72ea9-e99c-4d99-914c-6984e54ee89f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.218780 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" Mar 12 16:04:10 crc kubenswrapper[4687]: W0312 16:04:10.223809 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c66c344_67c8_40f9_9acd_384c4de62c77.slice/crio-6f0fa0a73ec248b0e7b9168467581a5545abe40b603e1d2f2a27b7959c8fb7af WatchSource:0}: Error finding container 6f0fa0a73ec248b0e7b9168467581a5545abe40b603e1d2f2a27b7959c8fb7af: Status 404 returned error can't find the container with id 6f0fa0a73ec248b0e7b9168467581a5545abe40b603e1d2f2a27b7959c8fb7af Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.235390 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.236016 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgmd\" (UniqueName: \"kubernetes.io/projected/8ae2744a-ce14-428c-a7bb-91b83a738577-kube-api-access-2vgmd\") pod \"openshift-controller-manager-operator-756b6f6bc6-2w2g2\" (UID: \"8ae2744a-ce14-428c-a7bb-91b83a738577\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.236228 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.250699 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.250824 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.255592 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.259499 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.266727 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.275968 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.287166 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304340 4687 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304398 4687 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304440 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.304421487 +0000 UTC m=+100.268383841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304486 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.304462768 +0000 UTC m=+100.268425212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304524 4687 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304559 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config podName:b4296ad5-b919-42fb-964b-3e067a376385 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.30454992 +0000 UTC m=+100.268512404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config") pod "openshift-apiserver-operator-796bbdcf4f-r67vt" (UID: "b4296ad5-b919-42fb-964b-3e067a376385") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304584 4687 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304609 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.304602401 +0000 UTC m=+100.268564875 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304638 4687 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304661 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.304654453 +0000 UTC m=+100.268616937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304689 4687 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304716 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.304707524 +0000 UTC m=+100.268669998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304721 4687 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304737 4687 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304749 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.304741806 +0000 UTC m=+100.268704150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304766 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert podName:b4296ad5-b919-42fb-964b-3e067a376385 nodeName:}" failed. 
No retries permitted until 2026-03-12 16:04:11.304758326 +0000 UTC m=+100.268720800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-r67vt" (UID: "b4296ad5-b919-42fb-964b-3e067a376385") : failed to sync secret cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304779 4687 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.304808 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.304802918 +0000 UTC m=+100.268765252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.315229 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d308d912-7b47-4156-95f8-db78dc53a205-config\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318346 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d308d912-7b47-4156-95f8-db78dc53a205-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318421 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-trusted-ca\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318447 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wtpg\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-kube-api-access-5wtpg\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318517 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/4907a0ff-69b9-4d86-8d43-b39ff4af8567-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318549 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d308d912-7b47-4156-95f8-db78dc53a205-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-bound-sa-token\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4907a0ff-69b9-4d86-8d43-b39ff4af8567-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318663 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-certificates\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.318753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-tls\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.319005 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:10.818992701 +0000 UTC m=+99.782955045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.320748 4687 projected.go:194] Error preparing data for projected volume kube-api-access-fd24p for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.320799 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b4296ad5-b919-42fb-964b-3e067a376385-kube-api-access-fd24p podName:b4296ad5-b919-42fb-964b-3e067a376385 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:10.820784949 +0000 UTC m=+99.784747383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fd24p" (UniqueName: "kubernetes.io/projected/b4296ad5-b919-42fb-964b-3e067a376385-kube-api-access-fd24p") pod "openshift-apiserver-operator-796bbdcf4f-r67vt" (UID: "b4296ad5-b919-42fb-964b-3e067a376385") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.334441 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.338421 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.339646 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:10 crc kubenswrapper[4687]: W0312 16:04:10.340049 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fbeb5c_442a_4f3b_ad4e_2f53ecd9197b.slice/crio-b2c8bcedc678cef1a137f1d8102d9909ac2664c984bb36d474b3bd1da76d73aa WatchSource:0}: Error finding container b2c8bcedc678cef1a137f1d8102d9909ac2664c984bb36d474b3bd1da76d73aa: Status 404 returned error can't find the container with id b2c8bcedc678cef1a137f1d8102d9909ac2664c984bb36d474b3bd1da76d73aa Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.354850 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.375394 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.390895 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q8mpl"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.395616 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419191 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419426 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lkjj\" (UniqueName: \"kubernetes.io/projected/0cf80ba9-ad15-48b8-ae26-f734183c8d30-kube-api-access-6lkjj\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419448 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ef74c4e-4360-451e-afe7-b17aa607dc79-proxy-tls\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e075a2cc-590a-4ac7-a6d5-c02336912013-config\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419541 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ef74c4e-4360-451e-afe7-b17aa607dc79-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b7jg\" (UniqueName: \"kubernetes.io/projected/54b72843-9c3b-48ea-b74c-5a8b0872e66d-kube-api-access-6b7jg\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-certs\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419591 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfc2e833-0f84-45b3-8caa-a5ce8d7579a4-cert\") pod \"ingress-canary-2kgqm\" (UID: \"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4\") " pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419643 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfxr\" (UniqueName: \"kubernetes.io/projected/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-kube-api-access-hbfxr\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419658 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6fcn\" (UniqueName: \"kubernetes.io/projected/43459f1d-71b0-484e-9944-73600cec2685-kube-api-access-j6fcn\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419710 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sbfv\" (UniqueName: \"kubernetes.io/projected/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-kube-api-access-2sbfv\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419724 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-signing-key\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-secret-volume\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwbqn\" (UniqueName: \"kubernetes.io/projected/79db3510-528b-4ead-a358-09f79a850a5c-kube-api-access-rwbqn\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419827 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40314ee9-1a5d-4917-85eb-73c6959fee43-config-volume\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419841 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40314ee9-1a5d-4917-85eb-73c6959fee43-metrics-tls\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-tmpfs\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419905 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchh6\" (UniqueName: \"kubernetes.io/projected/dd5ffe1b-8f89-43f0-95fb-8e3823891f2b-kube-api-access-dchh6\") pod \"auto-csr-approver-29555524-2x297\" (UID: \"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b\") " pod="openshift-infra/auto-csr-approver-29555524-2x297" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419928 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc9m4\" (UniqueName: \"kubernetes.io/projected/8955cbff-df56-4d9d-8191-6863b7fde61e-kube-api-access-tc9m4\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419944 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgv7h\" (UniqueName: \"kubernetes.io/projected/fd93502e-13cb-47b4-b70e-1fcaacc70ca1-kube-api-access-mgv7h\") pod \"control-plane-machine-set-operator-78cbb6b69f-g4cls\" (UID: \"fd93502e-13cb-47b4-b70e-1fcaacc70ca1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43459f1d-71b0-484e-9944-73600cec2685-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" 
Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.419992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsqqs\" (UniqueName: \"kubernetes.io/projected/a2b01ec2-7de4-41ae-9510-b66cf169dc5a-kube-api-access-vsqqs\") pod \"migrator-59844c95c7-ptdh2\" (UID: \"a2b01ec2-7de4-41ae-9510-b66cf169dc5a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420031 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/54b72843-9c3b-48ea-b74c-5a8b0872e66d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420053 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-node-bootstrap-token\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-csi-data-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420092 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e75a98a-387a-4b8e-9b01-dc788830f478-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420156 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-config\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420193 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-bound-sa-token\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420217 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-plugins-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420251 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/54b72843-9c3b-48ea-b74c-5a8b0872e66d-srv-cert\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79db3510-528b-4ead-a358-09f79a850a5c-serving-cert\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420308 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43459f1d-71b0-484e-9944-73600cec2685-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420324 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4907a0ff-69b9-4d86-8d43-b39ff4af8567-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2j6\" (UniqueName: \"kubernetes.io/projected/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-kube-api-access-kf2j6\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420444 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbmgt\" (UniqueName: \"kubernetes.io/projected/f865d3a8-d05b-47c7-a131-31849e5d82ad-kube-api-access-vbmgt\") pod \"package-server-manager-789f6589d5-2dm8z\" (UID: \"f865d3a8-d05b-47c7-a131-31849e5d82ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420460 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgnf\" (UniqueName: \"kubernetes.io/projected/235d7664-1ad8-4601-b279-1b8ff86f0bf7-kube-api-access-hkgnf\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420482 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc8bc5a5-2813-4313-b5c9-268a6677e889-proxy-tls\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rkt\" (UniqueName: \"kubernetes.io/projected/bfc2e833-0f84-45b3-8caa-a5ce8d7579a4-kube-api-access-57rkt\") pod \"ingress-canary-2kgqm\" (UID: \"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4\") " pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-config-volume\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q5pf\" (UniqueName: \"kubernetes.io/projected/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-kube-api-access-5q5pf\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd93502e-13cb-47b4-b70e-1fcaacc70ca1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g4cls\" (UID: \"fd93502e-13cb-47b4-b70e-1fcaacc70ca1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8e75a98a-387a-4b8e-9b01-dc788830f478-ready\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420589 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-apiservice-cert\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420614 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbtgz\" (UniqueName: \"kubernetes.io/projected/2ef74c4e-4360-451e-afe7-b17aa607dc79-kube-api-access-tbtgz\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420708 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d308d912-7b47-4156-95f8-db78dc53a205-config\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420811 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d308d912-7b47-4156-95f8-db78dc53a205-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420903 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-trusted-ca\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420933 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79db3510-528b-4ead-a358-09f79a850a5c-config\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.420957 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-socket-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.421211 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.422947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wtpg\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-kube-api-access-5wtpg\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-signing-cabundle\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqd52\" (UniqueName: \"kubernetes.io/projected/8e75a98a-387a-4b8e-9b01-dc788830f478-kube-api-access-cqd52\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 
crc kubenswrapper[4687]: I0312 16:04:10.423247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-client-ca\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423347 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-mountpoint-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423425 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e4943d4-469b-431d-a905-753b44ac01e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423464 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f885e731-aeb8-4832-a0f4-fffe5c762592-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tpmtb\" (UID: \"f885e731-aeb8-4832-a0f4-fffe5c762592\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423484 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4907a0ff-69b9-4d86-8d43-b39ff4af8567-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-registration-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423547 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e075a2cc-590a-4ac7-a6d5-c02336912013-images\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423568 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e075a2cc-590a-4ac7-a6d5-c02336912013-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423588 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmdk\" (UniqueName: \"kubernetes.io/projected/e075a2cc-590a-4ac7-a6d5-c02336912013-kube-api-access-8rmdk\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423629 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d308d912-7b47-4156-95f8-db78dc53a205-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8dq\" (UniqueName: \"kubernetes.io/projected/dc8bc5a5-2813-4313-b5c9-268a6677e889-kube-api-access-fl8dq\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423668 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqp5c\" (UniqueName: \"kubernetes.io/projected/be3e1335-148c-422f-82e3-5167ab0990fc-kube-api-access-jqp5c\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444h6\" (UniqueName: \"kubernetes.io/projected/f885e731-aeb8-4832-a0f4-fffe5c762592-kube-api-access-444h6\") pod \"multus-admission-controller-857f4d67dd-tpmtb\" (UID: \"f885e731-aeb8-4832-a0f4-fffe5c762592\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-webhook-cert\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423929 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/235d7664-1ad8-4601-b279-1b8ff86f0bf7-srv-cert\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" 
Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.423973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/235d7664-1ad8-4601-b279-1b8ff86f0bf7-profile-collector-cert\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424017 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f865d3a8-d05b-47c7-a131-31849e5d82ad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2dm8z\" (UID: \"f865d3a8-d05b-47c7-a131-31849e5d82ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424056 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be3e1335-148c-422f-82e3-5167ab0990fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424127 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc8bc5a5-2813-4313-b5c9-268a6677e889-images\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424145 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc8bc5a5-2813-4313-b5c9-268a6677e889-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424213 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-certificates\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424232 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4943d4-469b-431d-a905-753b44ac01e3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424250 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crtp6\" (UniqueName: \"kubernetes.io/projected/40314ee9-1a5d-4917-85eb-73c6959fee43-kube-api-access-crtp6\") pod \"dns-default-mpnwm\" (UID: 
\"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4943d4-469b-431d-a905-753b44ac01e3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424408 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-tls\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.424446 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e75a98a-387a-4b8e-9b01-dc788830f478-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.432414 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-certificates\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.444196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4907a0ff-69b9-4d86-8d43-b39ff4af8567-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.444869 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d308d912-7b47-4156-95f8-db78dc53a205-config\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.457745 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:10.957724699 +0000 UTC m=+99.921687043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.458308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d308d912-7b47-4156-95f8-db78dc53a205-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.461148 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.461582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4907a0ff-69b9-4d86-8d43-b39ff4af8567-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.465300 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.465613 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-trusted-ca\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.467145 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-tls\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.476338 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.477915 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.507348 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.515474 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528081 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528291 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-client-ca\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-mountpoint-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e4943d4-469b-431d-a905-753b44ac01e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528403 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f885e731-aeb8-4832-a0f4-fffe5c762592-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tpmtb\" (UID: \"f885e731-aeb8-4832-a0f4-fffe5c762592\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528417 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-registration-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528432 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e075a2cc-590a-4ac7-a6d5-c02336912013-images\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528529 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e075a2cc-590a-4ac7-a6d5-c02336912013-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmdk\" (UniqueName: \"kubernetes.io/projected/e075a2cc-590a-4ac7-a6d5-c02336912013-kube-api-access-8rmdk\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 
16:04:10.528559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8dq\" (UniqueName: \"kubernetes.io/projected/dc8bc5a5-2813-4313-b5c9-268a6677e889-kube-api-access-fl8dq\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528577 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqp5c\" (UniqueName: \"kubernetes.io/projected/be3e1335-148c-422f-82e3-5167ab0990fc-kube-api-access-jqp5c\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528624 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528641 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-444h6\" (UniqueName: \"kubernetes.io/projected/f885e731-aeb8-4832-a0f4-fffe5c762592-kube-api-access-444h6\") pod \"multus-admission-controller-857f4d67dd-tpmtb\" (UID: \"f885e731-aeb8-4832-a0f4-fffe5c762592\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528655 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-webhook-cert\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528670 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/235d7664-1ad8-4601-b279-1b8ff86f0bf7-srv-cert\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/235d7664-1ad8-4601-b279-1b8ff86f0bf7-profile-collector-cert\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528707 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f865d3a8-d05b-47c7-a131-31849e5d82ad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2dm8z\" (UID: \"f865d3a8-d05b-47c7-a131-31849e5d82ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528726 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc8bc5a5-2813-4313-b5c9-268a6677e889-images\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528742 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc8bc5a5-2813-4313-b5c9-268a6677e889-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528756 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be3e1335-148c-422f-82e3-5167ab0990fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4943d4-469b-431d-a905-753b44ac01e3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528798 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crtp6\" (UniqueName: \"kubernetes.io/projected/40314ee9-1a5d-4917-85eb-73c6959fee43-kube-api-access-crtp6\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528816 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528837 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4943d4-469b-431d-a905-753b44ac01e3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.528853 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-registration-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.529191 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-mountpoint-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.529337 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.029323633 +0000 UTC m=+99.993285977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.530188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-client-ca\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.530896 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc8bc5a5-2813-4313-b5c9-268a6677e889-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.531252 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc8bc5a5-2813-4313-b5c9-268a6677e889-images\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.531813 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hfzwb"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.532016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e75a98a-387a-4b8e-9b01-dc788830f478-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.532103 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.532414 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/be3e1335-148c-422f-82e3-5167ab0990fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.532501 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lkjj\" (UniqueName: \"kubernetes.io/projected/0cf80ba9-ad15-48b8-ae26-f734183c8d30-kube-api-access-6lkjj\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.532529 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ef74c4e-4360-451e-afe7-b17aa607dc79-proxy-tls\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.534719 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/235d7664-1ad8-4601-b279-1b8ff86f0bf7-srv-cert\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.534940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e075a2cc-590a-4ac7-a6d5-c02336912013-config\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536011 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e075a2cc-590a-4ac7-a6d5-c02336912013-images\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ef74c4e-4360-451e-afe7-b17aa607dc79-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b7jg\" (UniqueName: \"kubernetes.io/projected/54b72843-9c3b-48ea-b74c-5a8b0872e66d-kube-api-access-6b7jg\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536470 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-certs\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " 
pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536485 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfc2e833-0f84-45b3-8caa-a5ce8d7579a4-cert\") pod \"ingress-canary-2kgqm\" (UID: \"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4\") " pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536521 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfxr\" (UniqueName: \"kubernetes.io/projected/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-kube-api-access-hbfxr\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536540 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6fcn\" (UniqueName: \"kubernetes.io/projected/43459f1d-71b0-484e-9944-73600cec2685-kube-api-access-j6fcn\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sbfv\" (UniqueName: \"kubernetes.io/projected/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-kube-api-access-2sbfv\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-signing-key\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-secret-volume\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwbqn\" (UniqueName: \"kubernetes.io/projected/79db3510-528b-4ead-a358-09f79a850a5c-kube-api-access-rwbqn\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536654 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40314ee9-1a5d-4917-85eb-73c6959fee43-config-volume\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536669 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40314ee9-1a5d-4917-85eb-73c6959fee43-metrics-tls\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-tmpfs\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536710 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dchh6\" (UniqueName: \"kubernetes.io/projected/dd5ffe1b-8f89-43f0-95fb-8e3823891f2b-kube-api-access-dchh6\") pod \"auto-csr-approver-29555524-2x297\" (UID: \"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b\") " pod="openshift-infra/auto-csr-approver-29555524-2x297" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536730 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc9m4\" (UniqueName: \"kubernetes.io/projected/8955cbff-df56-4d9d-8191-6863b7fde61e-kube-api-access-tc9m4\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536750 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgv7h\" (UniqueName: \"kubernetes.io/projected/fd93502e-13cb-47b4-b70e-1fcaacc70ca1-kube-api-access-mgv7h\") pod \"control-plane-machine-set-operator-78cbb6b69f-g4cls\" (UID: \"fd93502e-13cb-47b4-b70e-1fcaacc70ca1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536773 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43459f1d-71b0-484e-9944-73600cec2685-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsqqs\" (UniqueName: \"kubernetes.io/projected/a2b01ec2-7de4-41ae-9510-b66cf169dc5a-kube-api-access-vsqqs\") pod \"migrator-59844c95c7-ptdh2\" (UID: \"a2b01ec2-7de4-41ae-9510-b66cf169dc5a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/54b72843-9c3b-48ea-b74c-5a8b0872e66d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nz548\" 
(UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536851 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-node-bootstrap-token\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-csi-data-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536883 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e75a98a-387a-4b8e-9b01-dc788830f478-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536908 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-config\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536933 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-plugins-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/54b72843-9c3b-48ea-b74c-5a8b0872e66d-srv-cert\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536971 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79db3510-528b-4ead-a358-09f79a850a5c-serving-cert\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.536986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43459f1d-71b0-484e-9944-73600cec2685-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537014 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kf2j6\" (UniqueName: \"kubernetes.io/projected/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-kube-api-access-kf2j6\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537018 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e075a2cc-590a-4ac7-a6d5-c02336912013-config\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbmgt\" (UniqueName: \"kubernetes.io/projected/f865d3a8-d05b-47c7-a131-31849e5d82ad-kube-api-access-vbmgt\") pod \"package-server-manager-789f6589d5-2dm8z\" (UID: \"f865d3a8-d05b-47c7-a131-31849e5d82ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537059 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgnf\" (UniqueName: \"kubernetes.io/projected/235d7664-1ad8-4601-b279-1b8ff86f0bf7-kube-api-access-hkgnf\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537081 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc8bc5a5-2813-4313-b5c9-268a6677e889-proxy-tls\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rkt\" (UniqueName: \"kubernetes.io/projected/bfc2e833-0f84-45b3-8caa-a5ce8d7579a4-kube-api-access-57rkt\") pod \"ingress-canary-2kgqm\" (UID: \"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4\") " pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537111 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-config-volume\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537128 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q5pf\" (UniqueName: \"kubernetes.io/projected/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-kube-api-access-5q5pf\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fd93502e-13cb-47b4-b70e-1fcaacc70ca1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g4cls\" (UID: \"fd93502e-13cb-47b4-b70e-1fcaacc70ca1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537161 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537176 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8e75a98a-387a-4b8e-9b01-dc788830f478-ready\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537191 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-apiservice-cert\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537209 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbtgz\" (UniqueName: \"kubernetes.io/projected/2ef74c4e-4360-451e-afe7-b17aa607dc79-kube-api-access-tbtgz\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537256 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79db3510-528b-4ead-a358-09f79a850a5c-config\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537279 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-socket-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-signing-cabundle\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.537319 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqd52\" (UniqueName: \"kubernetes.io/projected/8e75a98a-387a-4b8e-9b01-dc788830f478-kube-api-access-cqd52\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.538196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/235d7664-1ad8-4601-b279-1b8ff86f0bf7-profile-collector-cert\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.539000 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f885e731-aeb8-4832-a0f4-fffe5c762592-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tpmtb\" (UID: \"f885e731-aeb8-4832-a0f4-fffe5c762592\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.539797 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ef74c4e-4360-451e-afe7-b17aa607dc79-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.539800 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40314ee9-1a5d-4917-85eb-73c6959fee43-config-volume\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.540303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4943d4-469b-431d-a905-753b44ac01e3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.542484 4687 projected.go:194] Error preparing data for projected volume kube-api-access-htt8j for pod openshift-authentication-operator/authentication-operator-69f744f599-5hwpl: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.542543 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5046d5fc-693f-47bc-bae2-c3430c7e6b24-kube-api-access-htt8j podName:5046d5fc-693f-47bc-bae2-c3430c7e6b24 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.04252915 +0000 UTC m=+100.006491494 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-htt8j" (UniqueName: "kubernetes.io/projected/5046d5fc-693f-47bc-bae2-c3430c7e6b24-kube-api-access-htt8j") pod "authentication-operator-69f744f599-5hwpl" (UID: "5046d5fc-693f-47bc-bae2-c3430c7e6b24") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.542827 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f865d3a8-d05b-47c7-a131-31849e5d82ad-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2dm8z\" (UID: \"f865d3a8-d05b-47c7-a131-31849e5d82ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.543165 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e75a98a-387a-4b8e-9b01-dc788830f478-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.546002 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-csi-data-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.546228 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e75a98a-387a-4b8e-9b01-dc788830f478-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.546706 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-certs\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.546852 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-plugins-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.547125 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e075a2cc-590a-4ac7-a6d5-c02336912013-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.547148 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/54b72843-9c3b-48ea-b74c-5a8b0872e66d-srv-cert\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" 
Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.547665 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-config\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.547743 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-tmpfs\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.547854 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-config-volume\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.547901 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8e75a98a-387a-4b8e-9b01-dc788830f478-ready\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.548423 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79db3510-528b-4ead-a358-09f79a850a5c-config\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.548536 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8955cbff-df56-4d9d-8191-6863b7fde61e-socket-dir\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.549062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43459f1d-71b0-484e-9944-73600cec2685-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.549683 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-signing-cabundle\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.550184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ef74c4e-4360-451e-afe7-b17aa607dc79-proxy-tls\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.550318 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-signing-key\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.550679 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43459f1d-71b0-484e-9944-73600cec2685-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.550757 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4943d4-469b-431d-a905-753b44ac01e3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.550812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-webhook-cert\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.551877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-node-bootstrap-token\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.552591 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-crmcv"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.553292 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40314ee9-1a5d-4917-85eb-73c6959fee43-metrics-tls\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.554269 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfc2e833-0f84-45b3-8caa-a5ce8d7579a4-cert\") pod \"ingress-canary-2kgqm\" (UID: \"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4\") " pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.554681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 
16:04:10.554698 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.555042 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-apiservice-cert\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.558867 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc8bc5a5-2813-4313-b5c9-268a6677e889-proxy-tls\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.560587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/54b72843-9c3b-48ea-b74c-5a8b0872e66d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.562578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-secret-volume\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.564069 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd93502e-13cb-47b4-b70e-1fcaacc70ca1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g4cls\" (UID: \"fd93502e-13cb-47b4-b70e-1fcaacc70ca1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.568247 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b6bc4"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.571843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79db3510-528b-4ead-a358-09f79a850a5c-serving-cert\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.578682 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.593732 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.597105 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.615327 4687 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 16:04:10 crc kubenswrapper[4687]: W0312 16:04:10.616412 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf62af0c9_1bcc_4f36_bed0_85bf8a22f4a8.slice/crio-f3805bbe40e74fb30dd2c1e00ad7d64b63bb05ae54c1d4ecc6aecdcdb8bb89c3 WatchSource:0}: Error finding container f3805bbe40e74fb30dd2c1e00ad7d64b63bb05ae54c1d4ecc6aecdcdb8bb89c3: Status 404 returned error can't find the container with id f3805bbe40e74fb30dd2c1e00ad7d64b63bb05ae54c1d4ecc6aecdcdb8bb89c3 Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.634612 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.639175 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.639308 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.139288554 +0000 UTC m=+100.103250898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.640032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.640864 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.140825816 +0000 UTC m=+100.104788160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.654573 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.660699 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.674421 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 16:04:10 crc kubenswrapper[4687]: W0312 16:04:10.676202 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod076e302f_a426_407f_83bd_208837fd5d73.slice/crio-50debc77444319aba19075cd4511ef8d6339d80f87d418da54785d7b6ffeb90e WatchSource:0}: Error finding container 50debc77444319aba19075cd4511ef8d6339d80f87d418da54785d7b6ffeb90e: Status 404 returned error can't find the container with id 50debc77444319aba19075cd4511ef8d6339d80f87d418da54785d7b6ffeb90e Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.676416 4687 projected.go:194] Error preparing data for projected volume kube-api-access-rxmrf for pod openshift-controller-manager/controller-manager-879f6c89f-fzgfd: failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.676476 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf podName:f4b69a64-a9c5-41e6-81f6-15149754f232 nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.176459179 +0000 UTC m=+100.140421523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rxmrf" (UniqueName: "kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf") pod "controller-manager-879f6c89f-fzgfd" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232") : failed to sync configmap cache: timed out waiting for the condition Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.678745 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sqhmx"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.730064 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-bound-sa-token\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.740694 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.740861 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.240833918 +0000 UTC m=+100.204796262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.740996 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.741538 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.241530076 +0000 UTC m=+100.205492420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.750949 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wtpg\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-kube-api-access-5wtpg\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.783551 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d308d912-7b47-4156-95f8-db78dc53a205-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b79zk\" (UID: \"d308d912-7b47-4156-95f8-db78dc53a205\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.796877 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.809137 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crtp6\" (UniqueName: \"kubernetes.io/projected/40314ee9-1a5d-4917-85eb-73c6959fee43-kube-api-access-crtp6\") pod \"dns-default-mpnwm\" (UID: \"40314ee9-1a5d-4917-85eb-73c6959fee43\") " pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.814677 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqp5c\" (UniqueName: \"kubernetes.io/projected/be3e1335-148c-422f-82e3-5167ab0990fc-kube-api-access-jqp5c\") pod \"route-controller-manager-6576b87f9c-w8zkr\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.831724 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmdk\" (UniqueName: \"kubernetes.io/projected/e075a2cc-590a-4ac7-a6d5-c02336912013-kube-api-access-8rmdk\") pod \"machine-api-operator-5694c8668f-tvj7q\" (UID: \"e075a2cc-590a-4ac7-a6d5-c02336912013\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.837677 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l4j4z"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.844965 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.845252 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd24p\" (UniqueName: 
\"kubernetes.io/projected/b4296ad5-b919-42fb-964b-3e067a376385-kube-api-access-fd24p\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.846900 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.346872753 +0000 UTC m=+100.310835097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.857017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8dq\" (UniqueName: \"kubernetes.io/projected/dc8bc5a5-2813-4313-b5c9-268a6677e889-kube-api-access-fl8dq\") pod \"machine-config-operator-74547568cd-4m88f\" (UID: \"dc8bc5a5-2813-4313-b5c9-268a6677e889\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:10 crc kubenswrapper[4687]: W0312 16:04:10.858822 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda123cd77_f222_4d99_b77f_f11c6c323005.slice/crio-bc3c9ba518809a99fb890d46b219b9c4fa49ffc34509418f8a16d9d875674fc4 WatchSource:0}: Error finding container bc3c9ba518809a99fb890d46b219b9c4fa49ffc34509418f8a16d9d875674fc4: Status 404 returned error can't find the container with id bc3c9ba518809a99fb890d46b219b9c4fa49ffc34509418f8a16d9d875674fc4 Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.865203 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm"] Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.866014 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd24p\" (UniqueName: \"kubernetes.io/projected/b4296ad5-b919-42fb-964b-3e067a376385-kube-api-access-fd24p\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.875243 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.875983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444h6\" (UniqueName: \"kubernetes.io/projected/f885e731-aeb8-4832-a0f4-fffe5c762592-kube-api-access-444h6\") pod \"multus-admission-controller-857f4d67dd-tpmtb\" (UID: \"f885e731-aeb8-4832-a0f4-fffe5c762592\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.893146 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e4943d4-469b-431d-a905-753b44ac01e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6s68k\" (UID: \"0e4943d4-469b-431d-a905-753b44ac01e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.914407 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lkjj\" (UniqueName: \"kubernetes.io/projected/0cf80ba9-ad15-48b8-ae26-f734183c8d30-kube-api-access-6lkjj\") pod \"marketplace-operator-79b997595-lmlgb\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.930977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqd52\" (UniqueName: \"kubernetes.io/projected/8e75a98a-387a-4b8e-9b01-dc788830f478-kube-api-access-cqd52\") pod \"cni-sysctl-allowlist-ds-wn49x\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.949156 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:10 crc kubenswrapper[4687]: E0312 16:04:10.949655 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.449632649 +0000 UTC m=+100.413594993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.960910 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.964856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwbqn\" (UniqueName: \"kubernetes.io/projected/79db3510-528b-4ead-a358-09f79a850a5c-kube-api-access-rwbqn\") pod \"service-ca-operator-777779d784-f5r45\" (UID: \"79db3510-528b-4ead-a358-09f79a850a5c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.968528 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.969731 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc9m4\" (UniqueName: \"kubernetes.io/projected/8955cbff-df56-4d9d-8191-6863b7fde61e-kube-api-access-tc9m4\") pod \"csi-hostpathplugin-dtxpv\" (UID: \"8955cbff-df56-4d9d-8191-6863b7fde61e\") " pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.976932 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.993770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfxr\" (UniqueName: \"kubernetes.io/projected/3ad7feb5-e16a-43c8-9b7e-77d03f7d5979-kube-api-access-hbfxr\") pod \"machine-config-server-2d6vc\" (UID: \"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979\") " pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:10 crc kubenswrapper[4687]: I0312 16:04:10.993790 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2"] Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.013824 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b7jg\" (UniqueName: \"kubernetes.io/projected/54b72843-9c3b-48ea-b74c-5a8b0872e66d-kube-api-access-6b7jg\") pod \"olm-operator-6b444d44fb-nz548\" (UID: \"54b72843-9c3b-48ea-b74c-5a8b0872e66d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:11 crc kubenswrapper[4687]: W0312 16:04:11.027472 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ae2744a_ce14_428c_a7bb_91b83a738577.slice/crio-7c219235629ceaff2389cb5e0603518f5d486e75b8b5f8d3df16569ac41ac061 WatchSource:0}: Error finding container 7c219235629ceaff2389cb5e0603518f5d486e75b8b5f8d3df16569ac41ac061: Status 404 returned error can't find the container with id 7c219235629ceaff2389cb5e0603518f5d486e75b8b5f8d3df16569ac41ac061 Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.046387 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.047184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6fcn\" (UniqueName: \"kubernetes.io/projected/43459f1d-71b0-484e-9944-73600cec2685-kube-api-access-j6fcn\") pod \"kube-storage-version-migrator-operator-b67b599dd-4g2zm\" (UID: \"43459f1d-71b0-484e-9944-73600cec2685\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.050202 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.050383 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htt8j\" (UniqueName: \"kubernetes.io/projected/5046d5fc-693f-47bc-bae2-c3430c7e6b24-kube-api-access-htt8j\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.051343 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.551288776 +0000 UTC m=+100.515251120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.051972 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sbfv\" (UniqueName: \"kubernetes.io/projected/8dd77a62-4c2f-409c-af2a-bac22eb53ddc-kube-api-access-2sbfv\") pod \"packageserver-d55dfcdfc-g6vp4\" (UID: \"8dd77a62-4c2f-409c-af2a-bac22eb53ddc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.056394 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htt8j\" (UniqueName: \"kubernetes.io/projected/5046d5fc-693f-47bc-bae2-c3430c7e6b24-kube-api-access-htt8j\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.066576 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.076912 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.090409 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgv7h\" (UniqueName: \"kubernetes.io/projected/fd93502e-13cb-47b4-b70e-1fcaacc70ca1-kube-api-access-mgv7h\") pod \"control-plane-machine-set-operator-78cbb6b69f-g4cls\" (UID: \"fd93502e-13cb-47b4-b70e-1fcaacc70ca1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.097800 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.102063 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.108217 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsqqs\" (UniqueName: \"kubernetes.io/projected/a2b01ec2-7de4-41ae-9510-b66cf169dc5a-kube-api-access-vsqqs\") pod \"migrator-59844c95c7-ptdh2\" (UID: \"a2b01ec2-7de4-41ae-9510-b66cf169dc5a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.108554 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.115849 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.121182 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dchh6\" (UniqueName: \"kubernetes.io/projected/dd5ffe1b-8f89-43f0-95fb-8e3823891f2b-kube-api-access-dchh6\") pod \"auto-csr-approver-29555524-2x297\" (UID: \"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b\") " pod="openshift-infra/auto-csr-approver-29555524-2x297" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.125924 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.143013 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555524-2x297" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.147302 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.152380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.154405 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 16:04:11.654382361 +0000 UTC m=+100.618344705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.155634 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mpnwm"] Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.165000 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2j6\" (UniqueName: \"kubernetes.io/projected/eb8fdd94-f8f5-4719-985f-a8f93ff536c7-kube-api-access-kf2j6\") pod \"service-ca-9c57cc56f-6fpz6\" (UID: \"eb8fdd94-f8f5-4719-985f-a8f93ff536c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.167727 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2d6vc" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.171931 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbmgt\" (UniqueName: \"kubernetes.io/projected/f865d3a8-d05b-47c7-a131-31849e5d82ad-kube-api-access-vbmgt\") pod \"package-server-manager-789f6589d5-2dm8z\" (UID: \"f865d3a8-d05b-47c7-a131-31849e5d82ad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.193620 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rkt\" (UniqueName: \"kubernetes.io/projected/bfc2e833-0f84-45b3-8caa-a5ce8d7579a4-kube-api-access-57rkt\") pod \"ingress-canary-2kgqm\" (UID: \"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4\") " pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.198396 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkgnf\" (UniqueName: \"kubernetes.io/projected/235d7664-1ad8-4601-b279-1b8ff86f0bf7-kube-api-access-hkgnf\") pod \"catalog-operator-68c6474976-d9zff\" (UID: \"235d7664-1ad8-4601-b279-1b8ff86f0bf7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.219874 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbtgz\" (UniqueName: \"kubernetes.io/projected/2ef74c4e-4360-451e-afe7-b17aa607dc79-kube-api-access-tbtgz\") pod \"machine-config-controller-84d6567774-jxvts\" (UID: \"2ef74c4e-4360-451e-afe7-b17aa607dc79\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.240447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" event={"ID":"b641df97-62aa-491f-88f2-b92a688e9854","Type":"ContainerStarted","Data":"3a58b216b5f3e5a46f496db9cb1fd04e8e80b9e8da9b9326bfd187251e867a80"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.240504 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" event={"ID":"b641df97-62aa-491f-88f2-b92a688e9854","Type":"ContainerStarted","Data":"708a695b5e6b40ed33f05079b26a1c806d32a8e682edd2ff651a0e286e95b582"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.246996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q5pf\" (UniqueName: \"kubernetes.io/projected/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-kube-api-access-5q5pf\") pod \"collect-profiles-29555520-fxcc2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.253171 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.253447 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.753418307 +0000 UTC m=+100.717380651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.253627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.253782 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxmrf\" (UniqueName: \"kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.254006 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.753997812 +0000 UTC m=+100.717960156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.259123 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" event={"ID":"1fc8a11b-111f-4853-8227-7718374e75e7","Type":"ContainerStarted","Data":"00ab56b67efe0de0fd364376ee7e55b838ed874cb1916318a228193b915e8e99"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.261780 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxmrf\" (UniqueName: \"kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.296751 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" event={"ID":"076c2fbb-88e7-4df6-92ae-d4919b66f7e4","Type":"ContainerStarted","Data":"7dfad60f47582a0fa3b32d4bfeba69c0fec79b19d9830066f0650802a972c0ba"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.304741 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.312052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" event={"ID":"e8e442fd-56a6-49e5-b34d-86331dab75f4","Type":"ContainerStarted","Data":"44f204779ac822a8fcc2f5dbfa78c7f5ca3b1d1889abcae46a36d806a0b7f1b6"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.312746 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2kgqm" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.322671 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.330257 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.335477 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.336176 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" event={"ID":"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb","Type":"ContainerStarted","Data":"6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.336215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" event={"ID":"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb","Type":"ContainerStarted","Data":"a32ba040757c2158a33477ea6e4d44f59c0229189575b97c1d63675b48dfc6bc"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.337216 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.350220 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.350306 4687 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-q8mpl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.350362 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.350462 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" event={"ID":"076e302f-a426-407f-83bd-208837fd5d73","Type":"ContainerStarted","Data":"bd248d7013049054874e795c3dfc57607d9a410b82b0cb256ee0fef89272bbd8"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.350541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" event={"ID":"076e302f-a426-407f-83bd-208837fd5d73","Type":"ContainerStarted","Data":"50debc77444319aba19075cd4511ef8d6339d80f87d418da54785d7b6ffeb90e"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.354867 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355069 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355126 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355148 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.355220 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.855195327 +0000 UTC m=+100.819157671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355386 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355563 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355580 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.355626 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.356659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-service-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.356872 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.357951 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-config\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.358032 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.857373255 +0000 UTC m=+100.821335599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.358183 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4296ad5-b919-42fb-964b-3e067a376385-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.358463 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5046d5fc-693f-47bc-bae2-c3430c7e6b24-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.360007 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.364943 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.365013 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fzgfd\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.367180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-55qf7" event={"ID":"fcbdc66f-c79c-4a57-a030-665f5320b182","Type":"ContainerStarted","Data":"402a799402a1ca727253a54ae7b6e6b2357127c4960de5bb4c7d0bd9d16ebfab"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.367246 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-55qf7" event={"ID":"fcbdc66f-c79c-4a57-a030-665f5320b182","Type":"ContainerStarted","Data":"e3957481493ae9143c4ae3a401d0f38d9599e77342a0db9d06f4eb1ac82a3792"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.368928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5046d5fc-693f-47bc-bae2-c3430c7e6b24-serving-cert\") pod \"authentication-operator-69f744f599-5hwpl\" (UID: \"5046d5fc-693f-47bc-bae2-c3430c7e6b24\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.375264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4296ad5-b919-42fb-964b-3e067a376385-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r67vt\" (UID: \"b4296ad5-b919-42fb-964b-3e067a376385\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.381493 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.386949 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.405867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" event={"ID":"1c66c344-67c8-40f9-9acd-384c4de62c77","Type":"ContainerStarted","Data":"a2cfdfa04368408f886e0f7a65d2cdada686f7ddc78d472dddd889b9f76b6807"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.405904 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" event={"ID":"1c66c344-67c8-40f9-9acd-384c4de62c77","Type":"ContainerStarted","Data":"6f0fa0a73ec248b0e7b9168467581a5545abe40b603e1d2f2a27b7959c8fb7af"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.438047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4j4z" event={"ID":"a123cd77-f222-4d99-b77f-f11c6c323005","Type":"ContainerStarted","Data":"98478bd56cae521d0f23e55870a23b95030bd0bc00280da5e95baeea2c10cea8"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.439144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4j4z" event={"ID":"a123cd77-f222-4d99-b77f-f11c6c323005","Type":"ContainerStarted","Data":"bc3c9ba518809a99fb890d46b219b9c4fa49ffc34509418f8a16d9d875674fc4"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.458279 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.460391 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:11.960339358 +0000 UTC m=+100.924301702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.467006 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" event={"ID":"6a654f3a-4b2a-408c-87c1-908616a79eb3","Type":"ContainerStarted","Data":"c8c6cde971cc1eeff8aed2ca6d841735e0688e8f7fbc1115efd3f6d35bb3ac0a"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.470664 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" event={"ID":"96509adf-07e9-4d9a-926e-2c10cfc69b31","Type":"ContainerStarted","Data":"65b8ab60e9383d8650824bdfb80a784a1b490249d6056e093f66c1d37e9a8193"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.470695 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" event={"ID":"96509adf-07e9-4d9a-926e-2c10cfc69b31","Type":"ContainerStarted","Data":"7ab44579f4b94e6f4fb78d7929104d61f84832d956145ce1747bf7d61532f376"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.481788 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" event={"ID":"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe","Type":"ContainerStarted","Data":"4baab1dd0a0b0fc03c7a028f48604321ec4f893a91b34959f951173a8838c3f3"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.481829 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" event={"ID":"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe","Type":"ContainerStarted","Data":"35cb252c1aea299d330990d30701d2ecf64cb3b1af96cd9f1e1b3d05c107e288"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.482484 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.501295 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.501350 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.510808 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-crmcv" event={"ID":"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8","Type":"ContainerStarted","Data":"f7b0b853c1333dc4fae6bb0a7fdfafa94ee5511f908f6305a91173e9883fe190"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.510850 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-crmcv" 
event={"ID":"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8","Type":"ContainerStarted","Data":"f3805bbe40e74fb30dd2c1e00ad7d64b63bb05ae54c1d4ecc6aecdcdb8bb89c3"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.512143 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.516031 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" event={"ID":"8ae2744a-ce14-428c-a7bb-91b83a738577","Type":"ContainerStarted","Data":"7c219235629ceaff2389cb5e0603518f5d486e75b8b5f8d3df16569ac41ac061"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.522871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" event={"ID":"3dd72ea9-e99c-4d99-914c-6984e54ee89f","Type":"ContainerStarted","Data":"00218a320a9ae6482132bdc5781910ced240d312b9b37545deade2be1bd8ff1c"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.522907 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" event={"ID":"3dd72ea9-e99c-4d99-914c-6984e54ee89f","Type":"ContainerStarted","Data":"0b3113b469c27261a7f273ebf0cfaf28c1e62ed66e81248f4c287ca8563aaa69"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.530712 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.530762 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.532592 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" event={"ID":"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b","Type":"ContainerStarted","Data":"f97e7b82790efe8074bbf144307a8aa5256ec8259485c5d883a776cb649035f4"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.532635 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" event={"ID":"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b","Type":"ContainerStarted","Data":"b2c8bcedc678cef1a137f1d8102d9909ac2664c984bb36d474b3bd1da76d73aa"} Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.550014 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.551195 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tvj7q"] Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.561280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.563837 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.063821764 +0000 UTC m=+101.027784108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.597932 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.598558 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.641547 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk"] Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.643024 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr"] Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.665234 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.666354 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.166325463 +0000 UTC m=+101.130287807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.770296 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.770876 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.270856137 +0000 UTC m=+101.234818481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.871719 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.872050 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.372029951 +0000 UTC m=+101.335992295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:11 crc kubenswrapper[4687]: I0312 16:04:11.974739 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:11 crc kubenswrapper[4687]: E0312 16:04:11.975388 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.475376313 +0000 UTC m=+101.439338657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.076054 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.076756 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.576735671 +0000 UTC m=+101.540698015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.194719 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.195224 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.695207963 +0000 UTC m=+101.659170307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.266279 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548"] Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.295663 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.296053 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.796021286 +0000 UTC m=+101.759983630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.339375 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.397402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.397806 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.897792905 +0000 UTC m=+101.861755249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.498614 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.499035 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:12.999011421 +0000 UTC m=+101.962973765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.555530 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" event={"ID":"8ae2744a-ce14-428c-a7bb-91b83a738577","Type":"ContainerStarted","Data":"23daf672b46cd9c5c2d5bed938a1e732408ddf256643d8889c24dad0feda57b0"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.597554 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerID="00218a320a9ae6482132bdc5781910ced240d312b9b37545deade2be1bd8ff1c" exitCode=0 Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.597645 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" event={"ID":"3dd72ea9-e99c-4d99-914c-6984e54ee89f","Type":"ContainerDied","Data":"00218a320a9ae6482132bdc5781910ced240d312b9b37545deade2be1bd8ff1c"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.597670 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" event={"ID":"3dd72ea9-e99c-4d99-914c-6984e54ee89f","Type":"ContainerStarted","Data":"7ccb120aeebfe27d31cad2d606816a1479f4c9372a9527cc60301cc3f2f78534"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.605124 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.606470 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.613588 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.113570296 +0000 UTC m=+102.077532630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.626702 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:12 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:12 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:12 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.626741 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.643677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" event={"ID":"be3e1335-148c-422f-82e3-5167ab0990fc","Type":"ContainerStarted","Data":"786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.643722 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" event={"ID":"be3e1335-148c-422f-82e3-5167ab0990fc","Type":"ContainerStarted","Data":"5e938e29ecf96411c5576b957e16e8f5f42e22b55b85065f07775f810d8fc842"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.648307 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.652058 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" event={"ID":"c3fbeb5c-442a-4f3b-ad4e-2f53ecd9197b","Type":"ContainerStarted","Data":"88f4fd16c1ee3e21fee9ab481c5f715205fd909884722a3a0cfcd41e6dbdb607"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.661251 4687 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w8zkr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.661301 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" podUID="be3e1335-148c-422f-82e3-5167ab0990fc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.665122 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" 
event={"ID":"8e75a98a-387a-4b8e-9b01-dc788830f478","Type":"ContainerStarted","Data":"44eda4d272c0e4545dcbe0e680a511bb296bdfc1a75a8f9efa95d55ab1ff2f6c"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.665564 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.669145 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" event={"ID":"e075a2cc-590a-4ac7-a6d5-c02336912013","Type":"ContainerStarted","Data":"25c040c6f88ecfd86f4beeac663f0aad3b100f091f36195038d44913f5c2f563"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.669182 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" event={"ID":"e075a2cc-590a-4ac7-a6d5-c02336912013","Type":"ContainerStarted","Data":"cd6865cec53dd2a2da7052d4e8990514cc21b9f96230d74bba2f310a4264bf82"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.673603 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2d6vc" event={"ID":"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979","Type":"ContainerStarted","Data":"ed0c9989410cdf86fb347576f0e53cd128439ee04dab9ea4275571922c2955f3"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.673751 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2d6vc" event={"ID":"3ad7feb5-e16a-43c8-9b7e-77d03f7d5979","Type":"ContainerStarted","Data":"7866e767cf42356d03f3c2c2fb10ff6254b6e572d8f1c68692bdf9efa979def2"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.685652 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" event={"ID":"076c2fbb-88e7-4df6-92ae-d4919b66f7e4","Type":"ContainerStarted","Data":"c5a8989f00f2ddaa09787e11c96d980b2e1bbda2f847e69a08c7ae8c19e61702"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.685726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" event={"ID":"076c2fbb-88e7-4df6-92ae-d4919b66f7e4","Type":"ContainerStarted","Data":"ef6cbed0d0b8c5e6b35e899d65a875830f8a84ff5223b5ca00894a0f1a0f2fba"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.720056 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.721189 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.221170042 +0000 UTC m=+102.185132396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.758984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" event={"ID":"1fc8a11b-111f-4853-8227-7718374e75e7","Type":"ContainerStarted","Data":"daf222bf894dadcf31274a978f7a472aa627a7af36aa4f5b8b01c5858fb0d5a4"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.774852 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" event={"ID":"d308d912-7b47-4156-95f8-db78dc53a205","Type":"ContainerStarted","Data":"b2d01c9e03e64ee69975f5b4646bff541eb7698e3b12693cf8e23563eaa4dee2"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.797320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" event={"ID":"e8e442fd-56a6-49e5-b34d-86331dab75f4","Type":"ContainerStarted","Data":"777fe5f15fa2c0f4a06f4b1be7965cb0f58234185088b75c20279959c51d54f6"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.824760 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.829051 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.329036017 +0000 UTC m=+102.292998361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.835036 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mpnwm" event={"ID":"40314ee9-1a5d-4917-85eb-73c6959fee43","Type":"ContainerStarted","Data":"f759567c5aa0c01db30eb40757a0eec04ef53c4bd281deaf5f46a41c13c85e10"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.835086 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mpnwm" event={"ID":"40314ee9-1a5d-4917-85eb-73c6959fee43","Type":"ContainerStarted","Data":"e4f507f93172cd7664e06b5a1d8616e16c7acdd5a24a6266f91aa06d0c053253"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.888084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" event={"ID":"b641df97-62aa-491f-88f2-b92a688e9854","Type":"ContainerStarted","Data":"a660e8e6628dfe390ba5ef9c67d6e5a0334fcaec9ccf2a4c38037e5a58c86354"} Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.894554 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.894615 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.928154 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.928594 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.929028 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 16:04:12 crc kubenswrapper[4687]: E0312 16:04:12.929433 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.429412249 +0000 UTC m=+102.393374593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:12 crc kubenswrapper[4687]: I0312 16:04:12.984617 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-l4j4z" podStartSLOduration=65.984581919 podStartE2EDuration="1m5.984581919s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:12.981004573 +0000 UTC m=+101.944966917" watchObservedRunningTime="2026-03-12 16:04:12.984581919 +0000 UTC m=+101.948544283" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.013749 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-b6bc4" podStartSLOduration=66.013733367 podStartE2EDuration="1m6.013733367s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.009874843 +0000 UTC m=+101.973837197" watchObservedRunningTime="2026-03-12 16:04:13.013733367 +0000 UTC m=+101.977695711" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.030682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.034994 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.534974311 +0000 UTC m=+102.498936845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.054696 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-55qf7" podStartSLOduration=66.054675034 podStartE2EDuration="1m6.054675034s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.053348488 +0000 UTC m=+102.017310832" watchObservedRunningTime="2026-03-12 16:04:13.054675034 +0000 UTC m=+102.018637378" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.136830 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" podStartSLOduration=66.136813472 podStartE2EDuration="1m6.136813472s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.091681383 +0000 UTC m=+102.055643727" watchObservedRunningTime="2026-03-12 16:04:13.136813472 +0000 UTC m=+102.100775816" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.137414 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z5jf8" podStartSLOduration=67.137408849 podStartE2EDuration="1m7.137408849s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.135764374 +0000 UTC m=+102.099726728" watchObservedRunningTime="2026-03-12 16:04:13.137408849 +0000 UTC m=+102.101371193" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.139554 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.139970 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.639952017 +0000 UTC m=+102.603914361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.182729 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" podStartSLOduration=67.182712333 podStartE2EDuration="1m7.182712333s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.181106039 +0000 UTC m=+102.145068383" watchObservedRunningTime="2026-03-12 16:04:13.182712333 +0000 UTC m=+102.146674677" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.218317 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9zhk9" podStartSLOduration=66.218301244 podStartE2EDuration="1m6.218301244s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.217838882 +0000 UTC m=+102.181801236" watchObservedRunningTime="2026-03-12 16:04:13.218301244 +0000 UTC m=+102.182263588" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.241674 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.241978 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.741964953 +0000 UTC m=+102.705927297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.255231 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podStartSLOduration=66.255214562 podStartE2EDuration="1m6.255214562s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.254211585 +0000 UTC m=+102.218173939" watchObservedRunningTime="2026-03-12 16:04:13.255214562 +0000 UTC m=+102.219176906" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.283056 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2d6vc" podStartSLOduration=6.283041264 podStartE2EDuration="6.283041264s" podCreationTimestamp="2026-03-12 16:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.282325544 +0000 UTC m=+102.246287888" watchObservedRunningTime="2026-03-12 16:04:13.283041264 +0000 UTC m=+102.247003608" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.293078 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.340736 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podStartSLOduration=66.340722572 podStartE2EDuration="1m6.340722572s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.340015492 +0000 UTC m=+102.303977836" watchObservedRunningTime="2026-03-12 16:04:13.340722572 +0000 UTC m=+102.304684906" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.342332 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.342666 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.842648804 +0000 UTC m=+102.806611148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.358849 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:13 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:13 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:13 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.358898 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.411909 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2w2g2" podStartSLOduration=66.411873275 podStartE2EDuration="1m6.411873275s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.400759724 +0000 UTC m=+102.364722068" watchObservedRunningTime="2026-03-12 16:04:13.411873275 +0000 UTC m=+102.375835619" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.417917 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.426012 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5r45"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.441666 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38706: no serving certificate available for the kubelet" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.458765 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.459220 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:13.959206633 +0000 UTC m=+102.923168977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.480338 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.481438 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555524-2x297"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.524608 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.527681 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38710: no serving certificate available for the kubelet" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.533569 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dtxpv"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.540081 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tpmtb"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.548500 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:04:13 crc kubenswrapper[4687]: W0312 16:04:13.559533 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf885e731_aeb8_4832_a0f4_fffe5c762592.slice/crio-5379748769f1c987680b043bc857ef265624fce460694afbfabfcc869595cbae WatchSource:0}: Error finding container 5379748769f1c987680b043bc857ef265624fce460694afbfabfcc869595cbae: Status 404 returned error can't find the container with id 5379748769f1c987680b043bc857ef265624fce460694afbfabfcc869595cbae Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.565945 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.566472 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.066451361 +0000 UTC m=+103.030413705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.586828 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.605519 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lmlgb"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.605831 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4n7wm" podStartSLOduration=66.605814034 podStartE2EDuration="1m6.605814034s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.453275833 +0000 UTC m=+102.417238177" watchObservedRunningTime="2026-03-12 16:04:13.605814034 +0000 UTC m=+102.569776378" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.626753 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" podStartSLOduration=66.626733119 podStartE2EDuration="1m6.626733119s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.49980651 +0000 UTC m=+102.463768844" watchObservedRunningTime="2026-03-12 16:04:13.626733119 +0000 UTC m=+102.590695453" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.628962 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38722: no serving certificate available for the kubelet" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.629048 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.629088 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.641840 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6fpz6"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.649198 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" podStartSLOduration=6.649175956 podStartE2EDuration="6.649175956s" podCreationTimestamp="2026-03-12 16:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.530629602 +0000 UTC m=+102.494591966" watchObservedRunningTime="2026-03-12 16:04:13.649175956 +0000 UTC m=+102.613138300" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.651823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2"] Mar 12 
16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.655404 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.658466 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fzgfd"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.660281 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.669245 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.669588 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.169575097 +0000 UTC m=+103.133537441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.676384 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.677561 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5hwpl"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.679945 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.702006 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-crmcv" podStartSLOduration=66.701986913 podStartE2EDuration="1m6.701986913s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.642995599 +0000 UTC m=+102.606957943" watchObservedRunningTime="2026-03-12 16:04:13.701986913 +0000 UTC m=+102.665949257" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.703097 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.703151 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2kgqm"] Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.729598 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d8928" podStartSLOduration=67.729542278 podStartE2EDuration="1m7.729542278s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.708972862 +0000 UTC m=+102.672935206" watchObservedRunningTime="2026-03-12 16:04:13.729542278 +0000 UTC m=+102.693504622" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.730553 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt"] Mar 12 16:04:13 crc kubenswrapper[4687]: W0312 16:04:13.742851 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod235d7664_1ad8_4601_b279_1b8ff86f0bf7.slice/crio-1fdef059fa409117c6a0292305f531364ce61d4a0e1c54a916bc5c47c0007d99 WatchSource:0}: Error finding container 1fdef059fa409117c6a0292305f531364ce61d4a0e1c54a916bc5c47c0007d99: Status 404 returned error can't find the container with id 1fdef059fa409117c6a0292305f531364ce61d4a0e1c54a916bc5c47c0007d99 Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.753766 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" podStartSLOduration=66.753748371 podStartE2EDuration="1m6.753748371s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.749720872 +0000 UTC m=+102.713683226" watchObservedRunningTime="2026-03-12 16:04:13.753748371 +0000 UTC m=+102.717710715" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.763703 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38732: no serving certificate available for the kubelet" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.772391 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.772822 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.272805436 +0000 UTC m=+103.236767780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: W0312 16:04:13.773896 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43459f1d_71b0_484e_9944_73600cec2685.slice/crio-2d1787c20dcd7d67033b3acf123b41520e7a3c2382c51ee28ee39c48132943f8 WatchSource:0}: Error finding container 2d1787c20dcd7d67033b3acf123b41520e7a3c2382c51ee28ee39c48132943f8: Status 404 returned error can't find the container with id 2d1787c20dcd7d67033b3acf123b41520e7a3c2382c51ee28ee39c48132943f8 Mar 12 16:04:13 crc kubenswrapper[4687]: W0312 16:04:13.784996 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5046d5fc_693f_47bc_bae2_c3430c7e6b24.slice/crio-62ce385f7c6d50832de326ce7df863126551abae53647cc878b1ea5f79284552 WatchSource:0}: Error finding container 62ce385f7c6d50832de326ce7df863126551abae53647cc878b1ea5f79284552: Status 404 returned error can't find the container with id 62ce385f7c6d50832de326ce7df863126551abae53647cc878b1ea5f79284552 Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.800133 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fzz7k" podStartSLOduration=66.800118594 podStartE2EDuration="1m6.800118594s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.798465879 +0000 UTC m=+102.762428223" watchObservedRunningTime="2026-03-12 16:04:13.800118594 +0000 UTC m=+102.764080928" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.833812 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" podStartSLOduration=67.833793074 podStartE2EDuration="1m7.833793074s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:13.832975381 +0000 UTC m=+102.796937725" watchObservedRunningTime="2026-03-12 16:04:13.833793074 +0000 UTC m=+102.797755418" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.875012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.875653 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.375635804 +0000 UTC m=+103.339598148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.914429 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" event={"ID":"eb8fdd94-f8f5-4719-985f-a8f93ff536c7","Type":"ContainerStarted","Data":"23d79b2b73028525ed8425dc7b254a1fa9246b9676f8d3f7aab082dd3ba86de8"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.915453 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" event={"ID":"5046d5fc-693f-47bc-bae2-c3430c7e6b24","Type":"ContainerStarted","Data":"62ce385f7c6d50832de326ce7df863126551abae53647cc878b1ea5f79284552"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.920428 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38744: no serving certificate available for the kubelet" Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.932566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b79zk" event={"ID":"d308d912-7b47-4156-95f8-db78dc53a205","Type":"ContainerStarted","Data":"2513b6050da8f60b78d0177c8761b94f96bdcc237d4a474618cc1ccf70dea69a"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.944144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" event={"ID":"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2","Type":"ContainerStarted","Data":"bae00bf8f9e71811367ef40e09e7125f32e3a8edb60034c88b2e66c26373fa7c"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.944285 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" event={"ID":"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2","Type":"ContainerStarted","Data":"5e072bd429155c944c5fdc472e08172aceb9d7e5c0d78311d8b55b29009c34df"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.946459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" event={"ID":"f4b69a64-a9c5-41e6-81f6-15149754f232","Type":"ContainerStarted","Data":"17eec64af3a2bc7d0623b264673fde03237259c97631606ae22c12a6f3b51004"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.951797 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" event={"ID":"0cf80ba9-ad15-48b8-ae26-f734183c8d30","Type":"ContainerStarted","Data":"f3864b23019c544db1c0e9f3d09634a63335e664770d5941b5e212e6090701e2"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.957459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" event={"ID":"e075a2cc-590a-4ac7-a6d5-c02336912013","Type":"ContainerStarted","Data":"9bcb81a2bbf3b0466239cee28a352a81adc687824f58aaea2675b0e9d6f0ec50"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.976257 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:13 crc kubenswrapper[4687]: E0312 16:04:13.976697 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.476675304 +0000 UTC m=+103.440637648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.985978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" event={"ID":"79db3510-528b-4ead-a358-09f79a850a5c","Type":"ContainerStarted","Data":"a850043a7dea9d65aa35ff4d077150ac9ad21a055a0bd0271c004293d6a19084"} Mar 12 16:04:13 crc kubenswrapper[4687]: I0312 16:04:13.993537 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2kgqm" event={"ID":"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4","Type":"ContainerStarted","Data":"a89ffe3440ad2c7937bb4ee7475211580d2e2e87722a98bf4c02e0c0b8c0644d"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.035234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" event={"ID":"8e75a98a-387a-4b8e-9b01-dc788830f478","Type":"ContainerStarted","Data":"99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.047825 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" event={"ID":"235d7664-1ad8-4601-b279-1b8ff86f0bf7","Type":"ContainerStarted","Data":"1fdef059fa409117c6a0292305f531364ce61d4a0e1c54a916bc5c47c0007d99"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.052022 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38748: no serving certificate available for the kubelet" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.066572 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" event={"ID":"2ef74c4e-4360-451e-afe7-b17aa607dc79","Type":"ContainerStarted","Data":"3752e6e4ef3f02a59950caa3cb97632664f7f012c6921b2dd84d53cee115dc53"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.078153 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.079480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-mpnwm" event={"ID":"40314ee9-1a5d-4917-85eb-73c6959fee43","Type":"ContainerStarted","Data":"e738e59949254236a7a5c9b0cb9a39147a15673b472a091bf7fbaaef51dc841f"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.079977 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.080341 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.580328375 +0000 UTC m=+103.544290719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.081969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" event={"ID":"fd93502e-13cb-47b4-b70e-1fcaacc70ca1","Type":"ContainerStarted","Data":"d7a7227b7e397b2ac93f5d89d6e1f4883bf6e09f5b950bbdcdcc5869a5f37c52"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.084320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" event={"ID":"8dd77a62-4c2f-409c-af2a-bac22eb53ddc","Type":"ContainerStarted","Data":"cf9596c852cd6aaa695559bf7df46289adc3a7a5797d706c4aad529f14f76a59"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.084457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" event={"ID":"8dd77a62-4c2f-409c-af2a-bac22eb53ddc","Type":"ContainerStarted","Data":"6c35569cc712bb1fdcbe76b84e5f6d2e43373e06e9d853ddf6e91d233dc1cb42"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.085133 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.086402 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" event={"ID":"0e4943d4-469b-431d-a905-753b44ac01e3","Type":"ContainerStarted","Data":"bea70433e4ef6e056a16fb3d502ad2eabe62dfa6289d5554ee92e3167bc1119c"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.087351 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.087446 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 12 16:04:14 crc 
kubenswrapper[4687]: I0312 16:04:14.087663 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.090778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" event={"ID":"1fc8a11b-111f-4853-8227-7718374e75e7","Type":"ContainerStarted","Data":"7060a4a22cd95a09336831b24366ddf3a8dadceafeaf010a3d70afcaabaf20e7"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.094091 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fzgfd"] Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.116235 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerStarted","Data":"491eb34638170986d84d0c3610210bbe9bd1eb66ccc95a63200b32effdbee618"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.124929 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" event={"ID":"f865d3a8-d05b-47c7-a131-31849e5d82ad","Type":"ContainerStarted","Data":"1e829217934a5d41f413cb69bfbd017bebb6438bd32b90144154c9c8154e8311"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.133093 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr"] Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.134992 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555524-2x297" event={"ID":"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b","Type":"ContainerStarted","Data":"add797611e7ddb663a31570f338b39a08edb582dca475232cdccd470a00b5b99"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.156529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" event={"ID":"54b72843-9c3b-48ea-b74c-5a8b0872e66d","Type":"ContainerStarted","Data":"1c3123c62c6b60f3af141379b020d8a9b9fe25f1ec589dc3ee5d6b01db45351f"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.156584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" event={"ID":"54b72843-9c3b-48ea-b74c-5a8b0872e66d","Type":"ContainerStarted","Data":"fe73979402742d248fad368f2b0448bf0751790f4585d8f68ae1f16e783f460b"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.156818 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.178857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.180037 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 16:04:14.680009628 +0000 UTC m=+103.643972002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.185875 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.185930 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.188539 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" event={"ID":"b4296ad5-b919-42fb-964b-3e067a376385","Type":"ContainerStarted","Data":"ff331757a4ffb4c4122020d2d4714a4e8d09abbe9c8a54b8f10f96bf53e110e0"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.194708 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.199004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" event={"ID":"dc8bc5a5-2813-4313-b5c9-268a6677e889","Type":"ContainerStarted","Data":"0788c5c78b6a4de1a8eec0c9c1aad7158fe4f304b10d8c9b50ed50d29508c5e2"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.199474 4687 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g4lgw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]log ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]etcd ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/generic-apiserver-start-informers ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/max-in-flight-filter ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 12 16:04:14 crc kubenswrapper[4687]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 12 16:04:14 crc kubenswrapper[4687]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/project.openshift.io-projectcache ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-startinformers ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 12 16:04:14 crc kubenswrapper[4687]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 12 16:04:14 crc kubenswrapper[4687]: livez check failed Mar 12 16:04:14 crc 
kubenswrapper[4687]: I0312 16:04:14.199542 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" podUID="e8e442fd-56a6-49e5-b34d-86331dab75f4" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.202263 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" event={"ID":"f885e731-aeb8-4832-a0f4-fffe5c762592","Type":"ContainerStarted","Data":"5379748769f1c987680b043bc857ef265624fce460694afbfabfcc869595cbae"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.224493 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" event={"ID":"a2b01ec2-7de4-41ae-9510-b66cf169dc5a","Type":"ContainerStarted","Data":"bf968083829c2e681eece0e9050e16ab63be2402425c3b649d4fcb8eb0c14c40"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.234164 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" event={"ID":"43459f1d-71b0-484e-9944-73600cec2685","Type":"ContainerStarted","Data":"2d1787c20dcd7d67033b3acf123b41520e7a3c2382c51ee28ee39c48132943f8"} Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.243637 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.255506 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.261566 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38752: no serving certificate available for the kubelet" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.267120 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.280401 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.282471 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.782457325 +0000 UTC m=+103.746419669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.346821 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:14 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:14 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:14 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.347050 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.381034 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.382257 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.882234952 +0000 UTC m=+103.846197296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.482528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.482858 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:14.98284776 +0000 UTC m=+103.946810104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.585006 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.585614 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.085594326 +0000 UTC m=+104.049556670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.629471 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38766: no serving certificate available for the kubelet" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.688348 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.688935 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.188916608 +0000 UTC m=+104.152878952 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.745166 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" podStartSLOduration=67.745150967 podStartE2EDuration="1m7.745150967s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:14.704456518 +0000 UTC m=+103.668418862" watchObservedRunningTime="2026-03-12 16:04:14.745150967 +0000 UTC m=+103.709113311" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.789448 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.789769 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.289740222 +0000 UTC m=+104.253702566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.790018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.790730 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.290714568 +0000 UTC m=+104.254676902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.795693 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mpnwm" podStartSLOduration=7.795680152 podStartE2EDuration="7.795680152s" podCreationTimestamp="2026-03-12 16:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:14.764182722 +0000 UTC m=+103.728145056" watchObservedRunningTime="2026-03-12 16:04:14.795680152 +0000 UTC m=+103.759642496" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.839819 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-sqhmx" podStartSLOduration=67.839796195 podStartE2EDuration="1m7.839796195s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:14.799608569 +0000 UTC m=+103.763570913" watchObservedRunningTime="2026-03-12 16:04:14.839796195 +0000 UTC m=+103.803758539" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.840416 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tvj7q" podStartSLOduration=67.840406721 podStartE2EDuration="1m7.840406721s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:14.837939444 +0000 UTC m=+103.801901798" watchObservedRunningTime="2026-03-12 16:04:14.840406721 +0000 UTC m=+103.804369065" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.871980 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podStartSLOduration=67.871964043 podStartE2EDuration="1m7.871964043s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:14.869935628 +0000 UTC m=+103.833897962" watchObservedRunningTime="2026-03-12 16:04:14.871964043 +0000 UTC m=+103.835926387" Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.891041 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.891453 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 16:04:15.391433419 +0000 UTC m=+104.355395753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:14 crc kubenswrapper[4687]: I0312 16:04:14.992214 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:14 crc kubenswrapper[4687]: E0312 16:04:14.992677 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.492665385 +0000 UTC m=+104.456627719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.094993 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.096035 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.596009727 +0000 UTC m=+104.559972071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.196989 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.197396 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.697383386 +0000 UTC m=+104.661345730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.222015 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wn49x"] Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.245446 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" event={"ID":"0e4943d4-469b-431d-a905-753b44ac01e3","Type":"ContainerStarted","Data":"65af1e4d012dba0c3d92999688a73ea72aaf909e698f6679f0c9087fcee7944d"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.248403 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" event={"ID":"b4296ad5-b919-42fb-964b-3e067a376385","Type":"ContainerStarted","Data":"b15d55cb8cf94c3b5326a97be3f6411632a82079d12e1e1dca629f5fd020d6db"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.260064 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" event={"ID":"dc8bc5a5-2813-4313-b5c9-268a6677e889","Type":"ContainerStarted","Data":"bd3fc6f86b4b5feacd28010284c54be79052ecff2aef8decdeaeebe847e140dc"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.260106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" event={"ID":"dc8bc5a5-2813-4313-b5c9-268a6677e889","Type":"ContainerStarted","Data":"27117ebe16679c263a59d4621a3b513447ab94f9deffd47c874b985ee4164173"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.272932 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6s68k" 
podStartSLOduration=68.272919257 podStartE2EDuration="1m8.272919257s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.271167329 +0000 UTC m=+104.235129673" watchObservedRunningTime="2026-03-12 16:04:15.272919257 +0000 UTC m=+104.236881601" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.292125 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" event={"ID":"f885e731-aeb8-4832-a0f4-fffe5c762592","Type":"ContainerStarted","Data":"9b001efeb858c761254f0edd2e6f7dae6dd33ffea0af6da74d1caff340db7166"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.299825 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.300145 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.800128331 +0000 UTC m=+104.764090675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.300897 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m88f" podStartSLOduration=68.300885652 podStartE2EDuration="1m8.300885652s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.299230698 +0000 UTC m=+104.263193032" watchObservedRunningTime="2026-03-12 16:04:15.300885652 +0000 UTC m=+104.264847996" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.329283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerStarted","Data":"1abb5a0d54b93d0d6e19e379d5ec06228118f4f5f61129efa50204f956096060"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.347013 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:15 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:15 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:15 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.347092 4687 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.351828 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" event={"ID":"a2b01ec2-7de4-41ae-9510-b66cf169dc5a","Type":"ContainerStarted","Data":"1d8852dd4320696b62c973d212d884afae678211e243292f163336762ad5bc1e"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.351880 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" event={"ID":"a2b01ec2-7de4-41ae-9510-b66cf169dc5a","Type":"ContainerStarted","Data":"fd69eb14dbb6b054863bcc22c25c0760dad0a3effea01360a508e5b2e03971cf"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.377899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" event={"ID":"2ef74c4e-4360-451e-afe7-b17aa607dc79","Type":"ContainerStarted","Data":"5019b077eec78e0e19a41786b7aab90fc076c2cb721ce485c6f39af7fecd606b"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.377955 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" event={"ID":"2ef74c4e-4360-451e-afe7-b17aa607dc79","Type":"ContainerStarted","Data":"7f448160eba4edd2d838b7222654022071827cb4e3a1655a64ae6b75c4cc6078"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.384183 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r67vt" podStartSLOduration=69.384165752 podStartE2EDuration="1m9.384165752s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.382886077 +0000 UTC m=+104.346848431" watchObservedRunningTime="2026-03-12 16:04:15.384165752 +0000 UTC m=+104.348128096" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.401087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" event={"ID":"79db3510-528b-4ead-a358-09f79a850a5c","Type":"ContainerStarted","Data":"823040d988a25f507734d539c3f6504ab3216440d7626ab1d4d37c327e10d01b"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.401161 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.401898 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:15.901883691 +0000 UTC m=+104.865846035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.407927 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jxvts" podStartSLOduration=68.407909143 podStartE2EDuration="1m8.407909143s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.406855466 +0000 UTC m=+104.370817810" watchObservedRunningTime="2026-03-12 16:04:15.407909143 +0000 UTC m=+104.371871487" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.416607 4687 ???:1] "http: TLS handshake error from 192.168.126.11:38778: no serving certificate available for the kubelet" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.427098 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ptdh2" podStartSLOduration=68.427082822 podStartE2EDuration="1m8.427082822s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.422355594 +0000 UTC m=+104.386317938" watchObservedRunningTime="2026-03-12 16:04:15.427082822 +0000 UTC m=+104.391045166" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.446961 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" event={"ID":"235d7664-1ad8-4601-b279-1b8ff86f0bf7","Type":"ContainerStarted","Data":"1d48dce0f8ab7cbc5dadae8209ab1312fd6aa0ddd107c12a9e4236abfbf451e3"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.447805 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.451673 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.451708 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.460010 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5r45" podStartSLOduration=68.45999091 podStartE2EDuration="1m8.45999091s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.459343253 +0000 UTC m=+104.423305597" watchObservedRunningTime="2026-03-12 16:04:15.45999091 +0000 UTC m=+104.423953254" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.473622 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" event={"ID":"5046d5fc-693f-47bc-bae2-c3430c7e6b24","Type":"ContainerStarted","Data":"2b22319cb9c7470250fb1e55c8c0bbae56ff74c821fe846297bfda68bc1e516a"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.488728 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podStartSLOduration=68.488714777 podStartE2EDuration="1m8.488714777s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.485781008 +0000 UTC m=+104.449743342" watchObservedRunningTime="2026-03-12 16:04:15.488714777 +0000 UTC m=+104.452677121" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.489550 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2kgqm" event={"ID":"bfc2e833-0f84-45b3-8caa-a5ce8d7579a4","Type":"ContainerStarted","Data":"b12d08cf0e3d888ba4cea5e082eda742f6c9446e2f0d84d2f0950bcf3a3d0256"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.491653 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" event={"ID":"fd93502e-13cb-47b4-b70e-1fcaacc70ca1","Type":"ContainerStarted","Data":"b2e8864807f6ff8a8335834c15adf763f868159dadf852d98719d9520aac3cc4"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.502671 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" event={"ID":"f4b69a64-a9c5-41e6-81f6-15149754f232","Type":"ContainerStarted","Data":"a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.502822 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" podUID="f4b69a64-a9c5-41e6-81f6-15149754f232" containerName="controller-manager" containerID="cri-o://a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701" gracePeriod=30 Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.503287 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.503696 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.503893 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 16:04:16.003862766 +0000 UTC m=+104.967825100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.504046 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.504409 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.00439986 +0000 UTC m=+104.968362204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.506920 4687 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fzgfd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.506958 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" podUID="f4b69a64-a9c5-41e6-81f6-15149754f232" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.512006 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" event={"ID":"0cf80ba9-ad15-48b8-ae26-f734183c8d30","Type":"ContainerStarted","Data":"fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.512919 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.515527 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lmlgb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.515573 4687 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" podUID="0cf80ba9-ad15-48b8-ae26-f734183c8d30" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.517322 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" podStartSLOduration=69.517311479 podStartE2EDuration="1m9.517311479s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.515995353 +0000 UTC m=+104.479957697" watchObservedRunningTime="2026-03-12 16:04:15.517311479 +0000 UTC m=+104.481273823" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.545332 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" event={"ID":"eb8fdd94-f8f5-4719-985f-a8f93ff536c7","Type":"ContainerStarted","Data":"6d9703a1b4dc52d00ceaded41ecadffa89c796b90f1dddc11e35410fe554f1d2"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.550109 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2kgqm" podStartSLOduration=8.550097395 podStartE2EDuration="8.550097395s" podCreationTimestamp="2026-03-12 16:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.548685157 +0000 UTC m=+104.512647501" watchObservedRunningTime="2026-03-12 16:04:15.550097395 +0000 UTC m=+104.514059739" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.566391 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" event={"ID":"f865d3a8-d05b-47c7-a131-31849e5d82ad","Type":"ContainerStarted","Data":"b56384eb952f96a537f2ae1da600aba3a28f63a5470f329822f66cf329002e86"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.566443 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" event={"ID":"f865d3a8-d05b-47c7-a131-31849e5d82ad","Type":"ContainerStarted","Data":"eb69e943162e5be82936fdd615a53978344139af05914824eeba9b3c63002848"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.566615 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.576748 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" podStartSLOduration=68.576731855 podStartE2EDuration="1m8.576731855s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.574419432 +0000 UTC m=+104.538381776" watchObservedRunningTime="2026-03-12 16:04:15.576731855 +0000 UTC m=+104.540694199" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.598682 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g4cls" podStartSLOduration=68.598666958 podStartE2EDuration="1m8.598666958s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.598277557 +0000 UTC m=+104.562239901" watchObservedRunningTime="2026-03-12 16:04:15.598666958 +0000 UTC m=+104.562629302" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.600385 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" event={"ID":"43459f1d-71b0-484e-9944-73600cec2685","Type":"ContainerStarted","Data":"a3656448ff21566153f7bd9df40812451eb3f9f86042457151d234df426260e2"} Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.601102 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" podUID="be3e1335-148c-422f-82e3-5167ab0990fc" containerName="route-controller-manager" containerID="cri-o://786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf" gracePeriod=30 Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.605439 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.607620 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.107604099 +0000 UTC m=+105.071566443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.635694 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" podStartSLOduration=68.635678707 podStartE2EDuration="1m8.635678707s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.631652379 +0000 UTC m=+104.595614723" watchObservedRunningTime="2026-03-12 16:04:15.635678707 +0000 UTC m=+104.599641041" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.672249 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podStartSLOduration=68.672229195 podStartE2EDuration="1m8.672229195s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.670324694 +0000 UTC m=+104.634287058" watchObservedRunningTime="2026-03-12 16:04:15.672229195 +0000 UTC m=+104.636191539" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.708980 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" podStartSLOduration=69.708962517 podStartE2EDuration="1m9.708962517s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.691055533 +0000 UTC m=+104.655017877" watchObservedRunningTime="2026-03-12 16:04:15.708962517 +0000 UTC m=+104.672924861" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.710473 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4g2zm" podStartSLOduration=68.710467618 podStartE2EDuration="1m8.710467618s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.707613971 +0000 UTC m=+104.671576315" watchObservedRunningTime="2026-03-12 16:04:15.710467618 +0000 UTC m=+104.674429962" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.711924 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.715321 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.215305589 +0000 UTC m=+105.179267933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.770521 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.820393 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.820917 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.320894291 +0000 UTC m=+105.284856635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.832977 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6fpz6" podStartSLOduration=68.832955548 podStartE2EDuration="1m8.832955548s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:15.735742271 +0000 UTC m=+104.699704625" watchObservedRunningTime="2026-03-12 16:04:15.832955548 +0000 UTC m=+104.796917892" Mar 12 16:04:15 crc kubenswrapper[4687]: I0312 16:04:15.922688 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:15 crc kubenswrapper[4687]: E0312 16:04:15.923094 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.423076162 +0000 UTC m=+105.387038496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.024067 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.024353 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.524336069 +0000 UTC m=+105.488298413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.125648 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.125697 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.125721 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.125742 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.125772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.125799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.126844 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.626826678 +0000 UTC m=+105.590789012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.128580 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.168857 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb883751-bda8-4227-99fe-74d0b85cff17-metrics-certs\") pod \"network-metrics-daemon-d4g6l\" (UID: \"bb883751-bda8-4227-99fe-74d0b85cff17\") " pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.168864 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.169512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.169551 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.212317 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-fzgfd_f4b69a64-a9c5-41e6-81f6-15149754f232/controller-manager/0.log" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.212416 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.223570 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.226872 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.227242 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.72722524 +0000 UTC m=+105.691187584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.242007 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fc66cf755-48pv9"] Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.242191 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3e1335-148c-422f-82e3-5167ab0990fc" containerName="route-controller-manager" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.242204 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3e1335-148c-422f-82e3-5167ab0990fc" containerName="route-controller-manager" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.242224 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b69a64-a9c5-41e6-81f6-15149754f232" containerName="controller-manager" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.242232 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b69a64-a9c5-41e6-81f6-15149754f232" containerName="controller-manager" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.242310 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3e1335-148c-422f-82e3-5167ab0990fc" containerName="route-controller-manager" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.242322 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b69a64-a9c5-41e6-81f6-15149754f232" containerName="controller-manager" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.242648 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.268592 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc66cf755-48pv9"] Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.274646 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d4g6l" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles\") pod \"f4b69a64-a9c5-41e6-81f6-15149754f232\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca\") pod \"f4b69a64-a9c5-41e6-81f6-15149754f232\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328345 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be3e1335-148c-422f-82e3-5167ab0990fc-serving-cert\") pod \"be3e1335-148c-422f-82e3-5167ab0990fc\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328389 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxmrf\" (UniqueName: \"kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf\") pod \"f4b69a64-a9c5-41e6-81f6-15149754f232\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328414 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert\") pod \"f4b69a64-a9c5-41e6-81f6-15149754f232\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328432 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config\") pod \"f4b69a64-a9c5-41e6-81f6-15149754f232\" (UID: \"f4b69a64-a9c5-41e6-81f6-15149754f232\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328459 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqp5c\" (UniqueName: \"kubernetes.io/projected/be3e1335-148c-422f-82e3-5167ab0990fc-kube-api-access-jqp5c\") pod \"be3e1335-148c-422f-82e3-5167ab0990fc\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328478 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-config\") pod \"be3e1335-148c-422f-82e3-5167ab0990fc\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-client-ca\") pod \"be3e1335-148c-422f-82e3-5167ab0990fc\" (UID: \"be3e1335-148c-422f-82e3-5167ab0990fc\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.328596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.328918 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.828905647 +0000 UTC m=+105.792867991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.329730 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f4b69a64-a9c5-41e6-81f6-15149754f232" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.329937 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca" (OuterVolumeSpecName: "client-ca") pod "f4b69a64-a9c5-41e6-81f6-15149754f232" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.334494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-client-ca" (OuterVolumeSpecName: "client-ca") pod "be3e1335-148c-422f-82e3-5167ab0990fc" (UID: "be3e1335-148c-422f-82e3-5167ab0990fc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.335019 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-config" (OuterVolumeSpecName: "config") pod "be3e1335-148c-422f-82e3-5167ab0990fc" (UID: "be3e1335-148c-422f-82e3-5167ab0990fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.336146 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config" (OuterVolumeSpecName: "config") pod "f4b69a64-a9c5-41e6-81f6-15149754f232" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.343912 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:16 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:16 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:16 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.343969 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.360333 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3e1335-148c-422f-82e3-5167ab0990fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be3e1335-148c-422f-82e3-5167ab0990fc" (UID: "be3e1335-148c-422f-82e3-5167ab0990fc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.361229 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf" (OuterVolumeSpecName: "kube-api-access-rxmrf") pod "f4b69a64-a9c5-41e6-81f6-15149754f232" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232"). InnerVolumeSpecName "kube-api-access-rxmrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.361559 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f4b69a64-a9c5-41e6-81f6-15149754f232" (UID: "f4b69a64-a9c5-41e6-81f6-15149754f232"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.362719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3e1335-148c-422f-82e3-5167ab0990fc-kube-api-access-jqp5c" (OuterVolumeSpecName: "kube-api-access-jqp5c") pod "be3e1335-148c-422f-82e3-5167ab0990fc" (UID: "be3e1335-148c-422f-82e3-5167ab0990fc"). InnerVolumeSpecName "kube-api-access-jqp5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.392576 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.393319 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.397213 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432177 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-config\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432203 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-proxy-ca-bundles\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432240 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tfq\" (UniqueName: \"kubernetes.io/projected/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-kube-api-access-p2tfq\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432300 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-client-ca\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432330 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-serving-cert\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432382 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432393 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432402 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be3e1335-148c-422f-82e3-5167ab0990fc-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432412 4687 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxmrf\" (UniqueName: \"kubernetes.io/projected/f4b69a64-a9c5-41e6-81f6-15149754f232-kube-api-access-rxmrf\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432421 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4b69a64-a9c5-41e6-81f6-15149754f232-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432429 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b69a64-a9c5-41e6-81f6-15149754f232-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432438 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqp5c\" (UniqueName: \"kubernetes.io/projected/be3e1335-148c-422f-82e3-5167ab0990fc-kube-api-access-jqp5c\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432449 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.432459 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be3e1335-148c-422f-82e3-5167ab0990fc-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.432535 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:16.932519177 +0000 UTC m=+105.896481521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.515384 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.534053 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-serving-cert\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.534089 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-config\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.534109 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-proxy-ca-bundles\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.534141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tfq\" (UniqueName: \"kubernetes.io/projected/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-kube-api-access-p2tfq\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.534164 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.534200 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-client-ca\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.535050 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-client-ca\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " 
pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.535159 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-proxy-ca-bundles\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.535392 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.035380676 +0000 UTC m=+105.999343010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.535965 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-config\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.542425 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-serving-cert\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.555230 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tfq\" (UniqueName: \"kubernetes.io/projected/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-kube-api-access-p2tfq\") pod \"controller-manager-6fc66cf755-48pv9\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.561702 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.564282 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-d4g6l"] Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.652352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.652540 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.152514 +0000 UTC m=+106.116476344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.652891 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.654155 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.154142004 +0000 UTC m=+106.118104348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.664094 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-879f6c89f-fzgfd_f4b69a64-a9c5-41e6-81f6-15149754f232/controller-manager/0.log" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.664130 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b69a64-a9c5-41e6-81f6-15149754f232" containerID="a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701" exitCode=2 Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.664221 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.672356 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" event={"ID":"f4b69a64-a9c5-41e6-81f6-15149754f232","Type":"ContainerDied","Data":"a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701"} Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.676259 4687 scope.go:117] "RemoveContainer" containerID="a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.676448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fzgfd" event={"ID":"f4b69a64-a9c5-41e6-81f6-15149754f232","Type":"ContainerDied","Data":"17eec64af3a2bc7d0623b264673fde03237259c97631606ae22c12a6f3b51004"} Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.715830 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fzgfd"] Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.717757 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fzgfd"] Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.743646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" event={"ID":"f885e731-aeb8-4832-a0f4-fffe5c762592","Type":"ContainerStarted","Data":"a9750e765cdc43e79ac22fd4d82f5f14677298303b5d726d1db30c901723bff3"} Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.746488 4687 ???:1] "http: TLS handshake error from 192.168.126.11:40606: no serving certificate available for the kubelet" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.749085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" event={"ID":"bb883751-bda8-4227-99fe-74d0b85cff17","Type":"ContainerStarted","Data":"951465e3ef901ca03eca719ed973c994060b98b0391529e122f49bf6e4dfd0dc"} Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.752815 4687 generic.go:334] "Generic (PLEG): container finished" podID="be3e1335-148c-422f-82e3-5167ab0990fc" containerID="786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf" exitCode=0 Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.753660 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.753796 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.754702 4687 scope.go:117] "RemoveContainer" containerID="a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.756254 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701\": container with ID starting with a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701 not found: ID does not exist" containerID="a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.756279 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701"} err="failed to get container status \"a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701\": rpc error: code = NotFound desc = could not find container \"a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701\": container with ID starting with a510a90c12855e5476bd850ca5e504abcf05f67b785127cb2a29a826c1d20701 not found: ID does not exist" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.756383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" event={"ID":"be3e1335-148c-422f-82e3-5167ab0990fc","Type":"ContainerDied","Data":"786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf"} Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.756412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr" event={"ID":"be3e1335-148c-422f-82e3-5167ab0990fc","Type":"ContainerDied","Data":"5e938e29ecf96411c5576b957e16e8f5f42e22b55b85065f07775f810d8fc842"} Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.756423 4687 scope.go:117] "RemoveContainer" containerID="786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.757807 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.257791244 +0000 UTC m=+106.221753588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.758489 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" gracePeriod=30 Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.770072 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.770158 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tpmtb" podStartSLOduration=69.770139918 podStartE2EDuration="1m9.770139918s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:16.761999828 +0000 UTC m=+105.725962172" watchObservedRunningTime="2026-03-12 16:04:16.770139918 +0000 UTC m=+105.734102262" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.774481 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.799161 4687 scope.go:117] "RemoveContainer" containerID="786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.800101 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf\": container with ID starting with 786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf not found: ID does not exist" containerID="786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.800560 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf"} err="failed to get container status \"786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf\": rpc error: code = NotFound desc = could not find container \"786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf\": container with ID starting with 786e5ec6ff6c5c068252b701c5a6771666f5c2294e5563da342f8e5cdf25dbdf not found: ID does not exist" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.836227 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr"] Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.836275 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w8zkr"] Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.856320 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.859761 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.359747349 +0000 UTC m=+106.323709693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.960962 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.961164 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.461136819 +0000 UTC m=+106.425099163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.961407 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:16 crc kubenswrapper[4687]: E0312 16:04:16.961692 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.461679254 +0000 UTC m=+106.425641598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.975252 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.976022 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.983009 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 16:04:16 crc kubenswrapper[4687]: I0312 16:04:16.983323 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.036076 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.063074 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.063289 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.063347 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.063545 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.563520085 +0000 UTC m=+106.527482429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.160124 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc66cf755-48pv9"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.165101 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.165215 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.165315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.165486 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.165923 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.665906502 +0000 UTC m=+106.629868846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.191132 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.266409 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.266757 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.766738846 +0000 UTC m=+106.730701190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.350767 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:17 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:17 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:17 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.350823 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.367400 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.367772 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.867759665 +0000 UTC m=+106.831722009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.408124 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.468932 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.469171 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:17.969118983 +0000 UTC m=+106.933081327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.477924 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdstv"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.478813 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.481793 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.496259 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdstv"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.563277 4687 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.569741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-utilities\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.569811 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtr5w\" (UniqueName: \"kubernetes.io/projected/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-kube-api-access-gtr5w\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.569839 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.569875 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-catalog-content\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.570204 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:18.070190295 +0000 UTC m=+107.034152639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.671955 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.672127 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:18.172102037 +0000 UTC m=+107.136064381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.672158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-catalog-content\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.672182 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-utilities\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.672241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtr5w\" (UniqueName: \"kubernetes.io/projected/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-kube-api-access-gtr5w\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.672269 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.672581 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:18.172566541 +0000 UTC m=+107.136528875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.672663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-catalog-content\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.672815 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-utilities\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.687846 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q5vb4"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.688941 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.692136 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.693290 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5vb4"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.695669 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtr5w\" (UniqueName: \"kubernetes.io/projected/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-kube-api-access-gtr5w\") pod \"certified-operators-rdstv\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.748317 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3e1335-148c-422f-82e3-5167ab0990fc" path="/var/lib/kubelet/pods/be3e1335-148c-422f-82e3-5167ab0990fc/volumes" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.749162 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b69a64-a9c5-41e6-81f6-15149754f232" path="/var/lib/kubelet/pods/f4b69a64-a9c5-41e6-81f6-15149754f232/volumes" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.773100 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 
16:04:17.773241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-utilities\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.773269 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-catalog-content\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.773325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcpd\" (UniqueName: \"kubernetes.io/projected/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-kube-api-access-jjcpd\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.773433 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:18.273417055 +0000 UTC m=+107.237379399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.785029 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f78e9b3d6f3026f716b7fba970ad322fabe251132cffede27e436935a6c37cb4"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.785075 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d8df792601a2674ce7a0f5765182adffa022f0af220a344a2efde5124a98641d"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.786653 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" event={"ID":"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba","Type":"ContainerStarted","Data":"60a29bc8d77fcbde1baf0104de938f261b831a19eb603d9388272948f7e20efb"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.786711 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" event={"ID":"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba","Type":"ContainerStarted","Data":"d95813e427a7a1f4badbe95a5950c60e5ce37f7ecbf2b904e4535edfa354f738"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.787032 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.790744 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerStarted","Data":"4ed5f600d384aee5eeb74e4f6703cde9ca4afdf66f8f925cea30720af2cdc0d0"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.793695 4687 patch_prober.go:28] interesting pod/controller-manager-6fc66cf755-48pv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.793742 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" podUID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.795861 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a9500649dcf669601cf088e4368988834dda3c9f81af50f810a0f6956b292b84"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.796101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"008bd55ab441a4e462887424c8e7baf41478e03f615b70afe60657ba853d3b5a"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.807521 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" event={"ID":"bb883751-bda8-4227-99fe-74d0b85cff17","Type":"ContainerStarted","Data":"ccde204984042ab9765defc9a8ecf500686db7c9fcdd9e83cbfe5a7c844a8817"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.807726 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.812865 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"73f992acf0846b06dd62c2cf4fa3f681c32f10c6b8fe7311f96ad14daa811f84"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.812931 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"17b1797e8796521c2ab2491a5e809de34bbc6ed5e7747e645154be2cada747bd"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.813488 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.815201 4687 generic.go:334] "Generic (PLEG): container finished" podID="0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" containerID="bae00bf8f9e71811367ef40e09e7125f32e3a8edb60034c88b2e66c26373fa7c" exitCode=0 Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.815295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" event={"ID":"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2","Type":"ContainerDied","Data":"bae00bf8f9e71811367ef40e09e7125f32e3a8edb60034c88b2e66c26373fa7c"} Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.847697 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" podStartSLOduration=3.847678032 podStartE2EDuration="3.847678032s" podCreationTimestamp="2026-03-12 16:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:17.845864182 +0000 UTC m=+106.809826546" watchObservedRunningTime="2026-03-12 16:04:17.847678032 +0000 UTC m=+106.811640386" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.880469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.880519 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjcpd\" (UniqueName: \"kubernetes.io/projected/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-kube-api-access-jjcpd\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.880594 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-utilities\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.880617 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-catalog-content\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.881397 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-catalog-content\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.881699 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 16:04:18.38168478 +0000 UTC m=+107.345647124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kplxq" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.887393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-utilities\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.912826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjcpd\" (UniqueName: \"kubernetes.io/projected/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-kube-api-access-jjcpd\") pod \"community-operators-q5vb4\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.916091 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k6n92"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.917969 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.933569 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6n92"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.965593 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.981111 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.981225 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l245c\" (UniqueName: \"kubernetes.io/projected/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-kube-api-access-l245c\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.981244 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-utilities\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.981307 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-catalog-content\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:17 crc kubenswrapper[4687]: E0312 16:04:17.981432 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 16:04:18.481415105 +0000 UTC m=+107.445377439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.996059 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.996766 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.998699 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 16:04:17 crc kubenswrapper[4687]: I0312 16:04:17.998824 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.000805 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.016516 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.073394 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wprbg"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.074787 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.075118 4687 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-12T16:04:17.563595826Z","Handler":null,"Name":""} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.081231 4687 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.081259 4687 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.082721 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083043 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-utilities\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083119 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgg4t\" (UniqueName: \"kubernetes.io/projected/f4e09ff9-9c13-4757-b36b-5a495caf6f07-kube-api-access-wgg4t\") pod 
\"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l245c\" (UniqueName: \"kubernetes.io/projected/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-kube-api-access-l245c\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083180 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-utilities\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083215 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083245 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-catalog-content\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083288 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-catalog-content\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.083770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-catalog-content\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.084065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-utilities\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.085761 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wprbg"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.098286 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.098327 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.106828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l245c\" (UniqueName: \"kubernetes.io/projected/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-kube-api-access-l245c\") pod \"certified-operators-k6n92\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.127697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kplxq\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.147682 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.191186 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.191509 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-utilities\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.191540 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgg4t\" (UniqueName: \"kubernetes.io/projected/f4e09ff9-9c13-4757-b36b-5a495caf6f07-kube-api-access-wgg4t\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.191575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-catalog-content\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.191618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.191637 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.192228 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-catalog-content\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.192273 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.192273 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-utilities\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.201997 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.210164 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.212715 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgg4t\" (UniqueName: \"kubernetes.io/projected/f4e09ff9-9c13-4757-b36b-5a495caf6f07-kube-api-access-wgg4t\") pod \"community-operators-wprbg\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.255097 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdstv"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.259633 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.311815 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.348893 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:18 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:18 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:18 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.349294 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.353981 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5vb4"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.417623 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.488786 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kplxq"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.718375 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.719044 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.724768 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.724911 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.725028 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.725432 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.725549 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.725627 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.772946 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.796736 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s"] Mar 12 16:04:18 crc kubenswrapper[4687]: W0312 16:04:18.799586 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pode7bb8cda_f2cd_4e23_93bf_6fa87d809da6.slice/crio-12f47bc735c076a916ae79d78ff48d7e8887aef65f6318e806aeb6c5fbb73eb5 WatchSource:0}: Error finding container 12f47bc735c076a916ae79d78ff48d7e8887aef65f6318e806aeb6c5fbb73eb5: Status 404 returned error can't find the container with id 12f47bc735c076a916ae79d78ff48d7e8887aef65f6318e806aeb6c5fbb73eb5 Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.814285 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmc2r\" (UniqueName: \"kubernetes.io/projected/e732cd40-8648-4aeb-91f7-627434c8d8e6-kube-api-access-kmc2r\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.814336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-config\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.814409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-client-ca\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.814431 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e732cd40-8648-4aeb-91f7-627434c8d8e6-serving-cert\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.833805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-d4g6l" event={"ID":"bb883751-bda8-4227-99fe-74d0b85cff17","Type":"ContainerStarted","Data":"b73c2638953bf56cd17781a3e593f5bbe4184666f3d8f9cdbe0cb6dc5de3902d"} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.888753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" event={"ID":"4907a0ff-69b9-4d86-8d43-b39ff4af8567","Type":"ContainerStarted","Data":"006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c"} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.889067 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" event={"ID":"4907a0ff-69b9-4d86-8d43-b39ff4af8567","Type":"ContainerStarted","Data":"a3a1c879f0c2425c556d02b6a7698cdf29bc23dbdc5d0b80c27989f5015224f8"} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.889874 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.916286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmc2r\" (UniqueName: 
\"kubernetes.io/projected/e732cd40-8648-4aeb-91f7-627434c8d8e6-kube-api-access-kmc2r\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.916381 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-config\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.916418 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-client-ca\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.916438 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e732cd40-8648-4aeb-91f7-627434c8d8e6-serving-cert\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.918685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-config\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.920476 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-client-ca\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.921822 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2c7dde9b-51fe-4e2f-9f56-6495662896a1","Type":"ContainerStarted","Data":"a4262059840e7dad177f373f57f4efa705964d6a8075946ad14ab6a1a8d467df"} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.921857 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2c7dde9b-51fe-4e2f-9f56-6495662896a1","Type":"ContainerStarted","Data":"3209ef2ac6be2d5a597000866456c0594947f12fe3282e7279741a81f41c411f"} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.934538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e732cd40-8648-4aeb-91f7-627434c8d8e6-serving-cert\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.935087 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-d4g6l" podStartSLOduration=72.935076791 podStartE2EDuration="1m12.935076791s" podCreationTimestamp="2026-03-12 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:18.852350476 +0000 UTC m=+107.816312820" watchObservedRunningTime="2026-03-12 16:04:18.935076791 +0000 UTC m=+107.899039135" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.936802 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6n92"] Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.951300 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerID="9a692017b620abd6c55cbe7b4259a3ae58cb16eb354a3cefc13fc74864c34af2" exitCode=0 Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.951467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vb4" event={"ID":"b0ddda34-ec8b-46b6-9b04-dcd34d30177c","Type":"ContainerDied","Data":"9a692017b620abd6c55cbe7b4259a3ae58cb16eb354a3cefc13fc74864c34af2"} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.951508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vb4" event={"ID":"b0ddda34-ec8b-46b6-9b04-dcd34d30177c","Type":"ContainerStarted","Data":"08e9fdbbd71050e3bbd745ebe079ce2d568bf1029ff5d7a99d48e7e8e257cfb4"} Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.955129 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" podStartSLOduration=71.955111863 podStartE2EDuration="1m11.955111863s" podCreationTimestamp="2026-03-12 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:18.952831631 +0000 UTC m=+107.916793975" watchObservedRunningTime="2026-03-12 16:04:18.955111863 +0000 UTC m=+107.919074207" Mar 12 16:04:18 crc kubenswrapper[4687]: I0312 16:04:18.960335 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmc2r\" (UniqueName: \"kubernetes.io/projected/e732cd40-8648-4aeb-91f7-627434c8d8e6-kube-api-access-kmc2r\") pod \"route-controller-manager-98dfd7bd9-vpm4s\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.002059 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerStarted","Data":"6d6232cb7cfaaf007506aaa14e4009faed5cad41b25ea5e933c0c96e25368c04"} Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.002104 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerStarted","Data":"7259cccc9efc89c5a4d8c81edf56e57dafe8f749d0db7d5a9db1e952c7c2c492"} Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.009547 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6","Type":"ContainerStarted","Data":"12f47bc735c076a916ae79d78ff48d7e8887aef65f6318e806aeb6c5fbb73eb5"} Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.030067 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.030051017 podStartE2EDuration="3.030051017s" podCreationTimestamp="2026-03-12 16:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:19.028057223 +0000 UTC m=+107.992019567" watchObservedRunningTime="2026-03-12 16:04:19.030051017 +0000 UTC m=+107.994013361" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.031680 4687 generic.go:334] "Generic (PLEG): container finished" podID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerID="d13e2fb21ec3b7b35a2a52217fef55a6dd8879b69574fef901aa70b04a671191" exitCode=0 Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.032890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstv" event={"ID":"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06","Type":"ContainerDied","Data":"d13e2fb21ec3b7b35a2a52217fef55a6dd8879b69574fef901aa70b04a671191"} Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.032927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstv" event={"ID":"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06","Type":"ContainerStarted","Data":"23f0513a44f0904283d2e4b9f520503be4aa10ffe2263b249fdd0ec4b6a787df"} Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.043438 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.055905 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.079300 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" podStartSLOduration=12.079285097 podStartE2EDuration="12.079285097s" podCreationTimestamp="2026-03-12 16:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:19.078834095 +0000 UTC m=+108.042796439" watchObservedRunningTime="2026-03-12 16:04:19.079285097 +0000 UTC m=+108.043247441" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.163698 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wprbg"] Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.203403 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.215285 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.363063 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:19 crc kubenswrapper[4687]: 
[-]has-synced failed: reason withheld Mar 12 16:04:19 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:19 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.363530 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.404913 4687 ???:1] "http: TLS handshake error from 192.168.126.11:40610: no serving certificate available for the kubelet" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.493391 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vzpsd"] Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.494558 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.496161 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s"] Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.500187 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.510248 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzpsd"] Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.631879 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.651908 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-catalog-content\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.651974 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-utilities\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.652006 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mls\" (UniqueName: \"kubernetes.io/projected/1e9d2fa4-739e-470f-95c2-0e547a7e147e-kube-api-access-r4mls\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.747987 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.754530 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q5pf\" (UniqueName: 
\"kubernetes.io/projected/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-kube-api-access-5q5pf\") pod \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.754574 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-secret-volume\") pod \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.754727 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-config-volume\") pod \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\" (UID: \"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2\") " Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.754845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-utilities\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.754882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mls\" (UniqueName: \"kubernetes.io/projected/1e9d2fa4-739e-470f-95c2-0e547a7e147e-kube-api-access-r4mls\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.754924 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-catalog-content\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.755264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-catalog-content\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.760701 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-utilities\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.761299 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-config-volume" (OuterVolumeSpecName: "config-volume") pod "0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" (UID: "0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.761936 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-kube-api-access-5q5pf" (OuterVolumeSpecName: "kube-api-access-5q5pf") pod "0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" (UID: "0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2"). InnerVolumeSpecName "kube-api-access-5q5pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.765441 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" (UID: "0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.793044 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mls\" (UniqueName: \"kubernetes.io/projected/1e9d2fa4-739e-470f-95c2-0e547a7e147e-kube-api-access-r4mls\") pod \"redhat-marketplace-vzpsd\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.832618 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.858498 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.858546 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q5pf\" (UniqueName: \"kubernetes.io/projected/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-kube-api-access-5q5pf\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.858557 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.891162 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-glm9k"] Mar 12 16:04:19 crc kubenswrapper[4687]: E0312 16:04:19.891568 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" containerName="collect-profiles" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.891593 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" containerName="collect-profiles" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.891768 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" containerName="collect-profiles" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.892801 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:19 crc kubenswrapper[4687]: I0312 16:04:19.918127 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glm9k"] Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.059745 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-catalog-content\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.059854 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-utilities\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.059877 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4rq\" (UniqueName: \"kubernetes.io/projected/be373874-6fcd-41ac-ba5a-3090a71a145f-kube-api-access-dx4rq\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.064337 4687 generic.go:334] "Generic (PLEG): container finished" podID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerID="ab39192f516ed0c902d0baa4380dfc3639764d1e7edc20970200bada7bfb2359" exitCode=0 Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.064471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6n92" event={"ID":"6af9f3dd-14a8-4a00-b056-4d655e17b3f4","Type":"ContainerDied","Data":"ab39192f516ed0c902d0baa4380dfc3639764d1e7edc20970200bada7bfb2359"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.064524 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6n92" event={"ID":"6af9f3dd-14a8-4a00-b056-4d655e17b3f4","Type":"ContainerStarted","Data":"ebaeef191f0877d1d414ab6639ffb6c11954e1e9dc923d3c2da456c5d2572f53"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.076211 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" event={"ID":"e732cd40-8648-4aeb-91f7-627434c8d8e6","Type":"ContainerStarted","Data":"3ae16381352381f26af9f287aaad61d03c10f52b28cd1d249fb571d17a56b65b"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.076285 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" event={"ID":"e732cd40-8648-4aeb-91f7-627434c8d8e6","Type":"ContainerStarted","Data":"3ae5c401ff6f95d42a243ccd31afad2b78602b2a7dbb0ee402090ffffda8c538"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.076644 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.080227 4687 generic.go:334] "Generic (PLEG): container finished" podID="e7bb8cda-f2cd-4e23-93bf-6fa87d809da6" 
containerID="c71541c6ac8b7842b17b2e8a13ed7b0174447046311655824d24cf8f9b91c244" exitCode=0 Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.080400 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6","Type":"ContainerDied","Data":"c71541c6ac8b7842b17b2e8a13ed7b0174447046311655824d24cf8f9b91c244"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.112095 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.113161 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2" event={"ID":"0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2","Type":"ContainerDied","Data":"5e072bd429155c944c5fdc472e08172aceb9d7e5c0d78311d8b55b29009c34df"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.113206 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e072bd429155c944c5fdc472e08172aceb9d7e5c0d78311d8b55b29009c34df" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.115339 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" podStartSLOduration=6.115327279 podStartE2EDuration="6.115327279s" podCreationTimestamp="2026-03-12 16:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:20.113375457 +0000 UTC m=+109.077337801" watchObservedRunningTime="2026-03-12 16:04:20.115327279 +0000 UTC m=+109.079289623" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.128651 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerID="fe43ac87f41c5417fa227a97e8af181b72572f6c3c5ff9943be20152d0fefbac" exitCode=0 Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.128709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wprbg" event={"ID":"f4e09ff9-9c13-4757-b36b-5a495caf6f07","Type":"ContainerDied","Data":"fe43ac87f41c5417fa227a97e8af181b72572f6c3c5ff9943be20152d0fefbac"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.128734 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wprbg" event={"ID":"f4e09ff9-9c13-4757-b36b-5a495caf6f07","Type":"ContainerStarted","Data":"8d55310491677d0527051d90632c9bf29a1ec9d5e97fc3d709fdb84f2aea405d"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.131923 4687 generic.go:334] "Generic (PLEG): container finished" podID="2c7dde9b-51fe-4e2f-9f56-6495662896a1" containerID="a4262059840e7dad177f373f57f4efa705964d6a8075946ad14ab6a1a8d467df" exitCode=0 Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.132423 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2c7dde9b-51fe-4e2f-9f56-6495662896a1","Type":"ContainerDied","Data":"a4262059840e7dad177f373f57f4efa705964d6a8075946ad14ab6a1a8d467df"} Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.164825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-utilities\") pod 
\"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.164868 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4rq\" (UniqueName: \"kubernetes.io/projected/be373874-6fcd-41ac-ba5a-3090a71a145f-kube-api-access-dx4rq\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.164940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-catalog-content\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.167662 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-catalog-content\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.167805 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-utilities\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.219249 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4rq\" (UniqueName: \"kubernetes.io/projected/be373874-6fcd-41ac-ba5a-3090a71a145f-kube-api-access-dx4rq\") pod \"redhat-marketplace-glm9k\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.241480 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.241540 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.241826 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.241856 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.247627 
4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.251477 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.252256 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.275143 4687 patch_prober.go:28] interesting pod/console-f9d7485db-l4j4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.275645 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l4j4z" podUID="a123cd77-f222-4d99-b77f-f11c6c323005" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.348990 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.363298 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:20 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:20 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:20 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.363392 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.566917 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzpsd"] Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.685839 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8z8d"] Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.689093 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.692554 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.698063 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8z8d"] Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.719764 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glm9k"] Mar 12 16:04:20 crc kubenswrapper[4687]: W0312 16:04:20.763709 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe373874_6fcd_41ac_ba5a_3090a71a145f.slice/crio-718050d5c190dcfbdd1090e806e702c8e0c5fe6b4744baeaa49c972f9bac8b67 WatchSource:0}: Error finding container 718050d5c190dcfbdd1090e806e702c8e0c5fe6b4744baeaa49c972f9bac8b67: Status 404 returned error can't find the container with id 718050d5c190dcfbdd1090e806e702c8e0c5fe6b4744baeaa49c972f9bac8b67 Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.785643 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpxj9\" (UniqueName: \"kubernetes.io/projected/09abed8f-f9fb-41fd-b864-daf0e5026ae5-kube-api-access-wpxj9\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.785688 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-catalog-content\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.785721 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-utilities\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.887993 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-catalog-content\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.888044 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-utilities\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.888108 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpxj9\" (UniqueName: \"kubernetes.io/projected/09abed8f-f9fb-41fd-b864-daf0e5026ae5-kube-api-access-wpxj9\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 
12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.888525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-catalog-content\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.888636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-utilities\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.920033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpxj9\" (UniqueName: \"kubernetes.io/projected/09abed8f-f9fb-41fd-b864-daf0e5026ae5-kube-api-access-wpxj9\") pod \"redhat-operators-t8z8d\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:20 crc kubenswrapper[4687]: I0312 16:04:20.952336 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.030863 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.077900 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kbrlj"] Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.079474 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: E0312 16:04:21.084165 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.086950 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbrlj"] Mar 12 16:04:21 crc kubenswrapper[4687]: E0312 16:04:21.099804 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:21 crc kubenswrapper[4687]: E0312 16:04:21.101655 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:21 crc kubenswrapper[4687]: E0312 16:04:21.101738 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.143987 4687 generic.go:334] "Generic (PLEG): container finished" podID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerID="3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606" exitCode=0 Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.144178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glm9k" event={"ID":"be373874-6fcd-41ac-ba5a-3090a71a145f","Type":"ContainerDied","Data":"3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606"} Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.144207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glm9k" event={"ID":"be373874-6fcd-41ac-ba5a-3090a71a145f","Type":"ContainerStarted","Data":"718050d5c190dcfbdd1090e806e702c8e0c5fe6b4744baeaa49c972f9bac8b67"} Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.166264 4687 generic.go:334] "Generic (PLEG): container finished" podID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerID="4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9" exitCode=0 Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.167889 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzpsd" event={"ID":"1e9d2fa4-739e-470f-95c2-0e547a7e147e","Type":"ContainerDied","Data":"4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9"} Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.167943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzpsd" 
event={"ID":"1e9d2fa4-739e-470f-95c2-0e547a7e147e","Type":"ContainerStarted","Data":"d0784a137f4466891cb57079ab18d4877139f21ab8db7784f25010d34832391d"} Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.196040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzpbh\" (UniqueName: \"kubernetes.io/projected/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-kube-api-access-dzpbh\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.196103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-catalog-content\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.196129 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-utilities\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.298023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpbh\" (UniqueName: \"kubernetes.io/projected/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-kube-api-access-dzpbh\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.298098 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-catalog-content\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.298174 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-utilities\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.302080 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-catalog-content\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.302780 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-utilities\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.322430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpbh\" (UniqueName: 
\"kubernetes.io/projected/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-kube-api-access-dzpbh\") pod \"redhat-operators-kbrlj\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.353834 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:21 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Mar 12 16:04:21 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:21 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.353888 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.407641 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.468664 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.589775 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.618621 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kube-api-access\") pod \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\" (UID: \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.619540 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kubelet-dir\") pod \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\" (UID: \"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6\") " Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.622501 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e7bb8cda-f2cd-4e23-93bf-6fa87d809da6" (UID: "e7bb8cda-f2cd-4e23-93bf-6fa87d809da6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.629073 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7bb8cda-f2cd-4e23-93bf-6fa87d809da6" (UID: "e7bb8cda-f2cd-4e23-93bf-6fa87d809da6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.738418 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kubelet-dir\") pod \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.738496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kube-api-access\") pod \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\" (UID: \"2c7dde9b-51fe-4e2f-9f56-6495662896a1\") " Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.738752 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.738764 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7bb8cda-f2cd-4e23-93bf-6fa87d809da6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.739461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2c7dde9b-51fe-4e2f-9f56-6495662896a1" (UID: "2c7dde9b-51fe-4e2f-9f56-6495662896a1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.743400 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2c7dde9b-51fe-4e2f-9f56-6495662896a1" (UID: "2c7dde9b-51fe-4e2f-9f56-6495662896a1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.771674 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbrlj"] Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.771738 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.808785 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8z8d"] Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.839925 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:21 crc kubenswrapper[4687]: I0312 16:04:21.839970 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c7dde9b-51fe-4e2f-9f56-6495662896a1-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.208536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8z8d" event={"ID":"09abed8f-f9fb-41fd-b864-daf0e5026ae5","Type":"ContainerStarted","Data":"5c46d05ce32d24b94184055db1aadf3ca5f656b63bd8b7e10b42a719a8cde47a"} Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.225646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2c7dde9b-51fe-4e2f-9f56-6495662896a1","Type":"ContainerDied","Data":"3209ef2ac6be2d5a597000866456c0594947f12fe3282e7279741a81f41c411f"} Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.225723 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3209ef2ac6be2d5a597000866456c0594947f12fe3282e7279741a81f41c411f" Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.225828 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.272871 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.272841751 podStartE2EDuration="1.272841751s" podCreationTimestamp="2026-03-12 16:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:22.271044243 +0000 UTC m=+111.235006587" watchObservedRunningTime="2026-03-12 16:04:22.272841751 +0000 UTC m=+111.236804095" Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.276568 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.276700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e7bb8cda-f2cd-4e23-93bf-6fa87d809da6","Type":"ContainerDied","Data":"12f47bc735c076a916ae79d78ff48d7e8887aef65f6318e806aeb6c5fbb73eb5"} Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.276742 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12f47bc735c076a916ae79d78ff48d7e8887aef65f6318e806aeb6c5fbb73eb5" Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.284405 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbrlj" event={"ID":"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb","Type":"ContainerStarted","Data":"fb4522d4863004e09b552a8d1eb81c60a27577e47c3b9f431addca61515c847e"} Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.347841 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 16:04:22 crc kubenswrapper[4687]: [+]has-synced ok Mar 12 16:04:22 crc kubenswrapper[4687]: [+]process-running ok Mar 12 16:04:22 crc kubenswrapper[4687]: healthz check failed Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.347917 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 16:04:22 crc kubenswrapper[4687]: E0312 16:04:22.719003 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09abed8f_f9fb_41fd_b864_daf0e5026ae5.slice/crio-conmon-7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:04:22 crc kubenswrapper[4687]: I0312 16:04:22.887925 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mpnwm" Mar 12 16:04:23 crc kubenswrapper[4687]: I0312 16:04:23.302693 4687 generic.go:334] "Generic (PLEG): container finished" podID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerID="523a4a378a98c061599dbfc6fa7a7058a6300c297b9829827999b831ccbffb37" exitCode=0 Mar 12 16:04:23 crc kubenswrapper[4687]: I0312 16:04:23.302749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbrlj" event={"ID":"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb","Type":"ContainerDied","Data":"523a4a378a98c061599dbfc6fa7a7058a6300c297b9829827999b831ccbffb37"} Mar 12 16:04:23 crc kubenswrapper[4687]: I0312 16:04:23.306214 4687 generic.go:334] "Generic (PLEG): container finished" podID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerID="7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9" exitCode=0 Mar 12 16:04:23 crc kubenswrapper[4687]: I0312 16:04:23.306245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8z8d" event={"ID":"09abed8f-f9fb-41fd-b864-daf0e5026ae5","Type":"ContainerDied","Data":"7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9"} Mar 12 16:04:23 crc kubenswrapper[4687]: I0312 
16:04:23.348085 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:23 crc kubenswrapper[4687]: I0312 16:04:23.350999 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 16:04:23 crc kubenswrapper[4687]: I0312 16:04:23.733719 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:04:24 crc kubenswrapper[4687]: I0312 16:04:24.553653 4687 ???:1] "http: TLS handshake error from 192.168.126.11:40612: no serving certificate available for the kubelet" Mar 12 16:04:24 crc kubenswrapper[4687]: I0312 16:04:24.655158 4687 ???:1] "http: TLS handshake error from 192.168.126.11:40626: no serving certificate available for the kubelet" Mar 12 16:04:25 crc kubenswrapper[4687]: I0312 16:04:25.332908 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 16:04:25 crc kubenswrapper[4687]: I0312 16:04:25.335574 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d"} Mar 12 16:04:25 crc kubenswrapper[4687]: I0312 16:04:25.336634 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:04:25 crc kubenswrapper[4687]: I0312 16:04:25.356657 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=41.356641069 podStartE2EDuration="41.356641069s" podCreationTimestamp="2026-03-12 16:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:25.355908689 +0000 UTC m=+114.319871033" watchObservedRunningTime="2026-03-12 16:04:25.356641069 +0000 UTC m=+114.320603413" Mar 12 16:04:30 crc kubenswrapper[4687]: I0312 16:04:30.241082 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 16:04:30 crc kubenswrapper[4687]: I0312 16:04:30.263783 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:30 crc kubenswrapper[4687]: I0312 16:04:30.267293 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:04:31 crc kubenswrapper[4687]: E0312 16:04:31.080489 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:31 crc kubenswrapper[4687]: E0312 16:04:31.082049 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:31 crc 
kubenswrapper[4687]: E0312 16:04:31.083246 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:31 crc kubenswrapper[4687]: E0312 16:04:31.083274 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" Mar 12 16:04:31 crc kubenswrapper[4687]: I0312 16:04:31.748624 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 12 16:04:33 crc kubenswrapper[4687]: I0312 16:04:33.851517 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fc66cf755-48pv9"] Mar 12 16:04:33 crc kubenswrapper[4687]: I0312 16:04:33.851977 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" podUID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" containerName="controller-manager" containerID="cri-o://60a29bc8d77fcbde1baf0104de938f261b831a19eb603d9388272948f7e20efb" gracePeriod=30 Mar 12 16:04:33 crc kubenswrapper[4687]: I0312 16:04:33.865854 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s"] Mar 12 16:04:33 crc kubenswrapper[4687]: I0312 16:04:33.866103 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" podUID="e732cd40-8648-4aeb-91f7-627434c8d8e6" containerName="route-controller-manager" containerID="cri-o://3ae16381352381f26af9f287aaad61d03c10f52b28cd1d249fb571d17a56b65b" gracePeriod=30 Mar 12 16:04:33 crc kubenswrapper[4687]: I0312 16:04:33.878255 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.878230605 podStartE2EDuration="2.878230605s" podCreationTimestamp="2026-03-12 16:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:33.876436507 +0000 UTC m=+122.840398851" watchObservedRunningTime="2026-03-12 16:04:33.878230605 +0000 UTC m=+122.842192949" Mar 12 16:04:34 crc kubenswrapper[4687]: I0312 16:04:34.396283 4687 generic.go:334] "Generic (PLEG): container finished" podID="e732cd40-8648-4aeb-91f7-627434c8d8e6" containerID="3ae16381352381f26af9f287aaad61d03c10f52b28cd1d249fb571d17a56b65b" exitCode=0 Mar 12 16:04:34 crc kubenswrapper[4687]: I0312 16:04:34.396386 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" event={"ID":"e732cd40-8648-4aeb-91f7-627434c8d8e6","Type":"ContainerDied","Data":"3ae16381352381f26af9f287aaad61d03c10f52b28cd1d249fb571d17a56b65b"} Mar 12 16:04:34 crc kubenswrapper[4687]: I0312 16:04:34.398475 4687 generic.go:334] "Generic (PLEG): container finished" podID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" 
containerID="60a29bc8d77fcbde1baf0104de938f261b831a19eb603d9388272948f7e20efb" exitCode=0 Mar 12 16:04:34 crc kubenswrapper[4687]: I0312 16:04:34.398512 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" event={"ID":"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba","Type":"ContainerDied","Data":"60a29bc8d77fcbde1baf0104de938f261b831a19eb603d9388272948f7e20efb"} Mar 12 16:04:34 crc kubenswrapper[4687]: I0312 16:04:34.758241 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:04:34 crc kubenswrapper[4687]: I0312 16:04:34.821333 4687 ???:1] "http: TLS handshake error from 192.168.126.11:50040: no serving certificate available for the kubelet" Mar 12 16:04:36 crc kubenswrapper[4687]: I0312 16:04:36.562495 4687 patch_prober.go:28] interesting pod/controller-manager-6fc66cf755-48pv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Mar 12 16:04:36 crc kubenswrapper[4687]: I0312 16:04:36.562726 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" podUID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 12 16:04:38 crc kubenswrapper[4687]: I0312 16:04:38.154568 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:04:39 crc kubenswrapper[4687]: I0312 16:04:39.044776 4687 patch_prober.go:28] interesting pod/route-controller-manager-98dfd7bd9-vpm4s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" start-of-body= Mar 12 16:04:39 crc kubenswrapper[4687]: I0312 16:04:39.044907 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" podUID="e732cd40-8648-4aeb-91f7-627434c8d8e6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Mar 12 16:04:41 crc kubenswrapper[4687]: E0312 16:04:41.079208 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:41 crc kubenswrapper[4687]: E0312 16:04:41.080730 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:41 crc kubenswrapper[4687]: E0312 16:04:41.081910 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:41 crc kubenswrapper[4687]: E0312 16:04:41.081946 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.208863 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.244784 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6567fb586c-2pcl6"] Mar 12 16:04:42 crc kubenswrapper[4687]: E0312 16:04:42.245106 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" containerName="controller-manager" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.245130 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" containerName="controller-manager" Mar 12 16:04:42 crc kubenswrapper[4687]: E0312 16:04:42.245148 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7dde9b-51fe-4e2f-9f56-6495662896a1" containerName="pruner" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.245157 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7dde9b-51fe-4e2f-9f56-6495662896a1" containerName="pruner" Mar 12 16:04:42 crc kubenswrapper[4687]: E0312 16:04:42.245171 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7bb8cda-f2cd-4e23-93bf-6fa87d809da6" containerName="pruner" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.245181 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7bb8cda-f2cd-4e23-93bf-6fa87d809da6" containerName="pruner" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.245295 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" containerName="controller-manager" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.245312 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7bb8cda-f2cd-4e23-93bf-6fa87d809da6" containerName="pruner" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.245323 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7dde9b-51fe-4e2f-9f56-6495662896a1" containerName="pruner" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.245794 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.255560 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6567fb586c-2pcl6"] Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258556 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-config\") pod \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258625 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2tfq\" (UniqueName: \"kubernetes.io/projected/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-kube-api-access-p2tfq\") pod \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258652 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-client-ca\") pod \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258683 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-serving-cert\") pod \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258763 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-proxy-ca-bundles\") pod \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\" (UID: \"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba\") " Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258869 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-client-ca\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258914 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmkxb\" (UniqueName: \"kubernetes.io/projected/5fe41b48-7128-4c8d-ab08-3598506bc103-kube-api-access-kmkxb\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258952 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-config\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe41b48-7128-4c8d-ab08-3598506bc103-serving-cert\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.258992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-proxy-ca-bundles\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.260186 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-config" (OuterVolumeSpecName: "config") pod "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" (UID: "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.264414 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" (UID: "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.264834 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" (UID: "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.273246 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-kube-api-access-p2tfq" (OuterVolumeSpecName: "kube-api-access-p2tfq") pod "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" (UID: "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba"). InnerVolumeSpecName "kube-api-access-p2tfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.283541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" (UID: "387d27a5-27a6-4f5b-9c6f-a45cb6c925ba"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-client-ca\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360348 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmkxb\" (UniqueName: \"kubernetes.io/projected/5fe41b48-7128-4c8d-ab08-3598506bc103-kube-api-access-kmkxb\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360417 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-config\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360436 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe41b48-7128-4c8d-ab08-3598506bc103-serving-cert\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-proxy-ca-bundles\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360536 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360553 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360562 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2tfq\" (UniqueName: \"kubernetes.io/projected/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-kube-api-access-p2tfq\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360571 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.360579 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.361755 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-proxy-ca-bundles\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.362293 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-client-ca\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.364163 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-config\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.369708 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe41b48-7128-4c8d-ab08-3598506bc103-serving-cert\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.384816 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmkxb\" (UniqueName: \"kubernetes.io/projected/5fe41b48-7128-4c8d-ab08-3598506bc103-kube-api-access-kmkxb\") pod \"controller-manager-6567fb586c-2pcl6\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.447388 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" event={"ID":"387d27a5-27a6-4f5b-9c6f-a45cb6c925ba","Type":"ContainerDied","Data":"d95813e427a7a1f4badbe95a5950c60e5ce37f7ecbf2b904e4535edfa354f738"} Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.447453 4687 scope.go:117] "RemoveContainer" containerID="60a29bc8d77fcbde1baf0104de938f261b831a19eb603d9388272948f7e20efb" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.447462 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc66cf755-48pv9" Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.474844 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fc66cf755-48pv9"] Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.477388 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fc66cf755-48pv9"] Mar 12 16:04:42 crc kubenswrapper[4687]: I0312 16:04:42.570022 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:43 crc kubenswrapper[4687]: I0312 16:04:43.831602 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387d27a5-27a6-4f5b-9c6f-a45cb6c925ba" path="/var/lib/kubelet/pods/387d27a5-27a6-4f5b-9c6f-a45cb6c925ba/volumes" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.079644 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.102551 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784784d859-zht8b"] Mar 12 16:04:47 crc kubenswrapper[4687]: E0312 16:04:47.102842 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e732cd40-8648-4aeb-91f7-627434c8d8e6" containerName="route-controller-manager" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.102858 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e732cd40-8648-4aeb-91f7-627434c8d8e6" containerName="route-controller-manager" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.102979 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e732cd40-8648-4aeb-91f7-627434c8d8e6" containerName="route-controller-manager" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.103410 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.114509 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784784d859-zht8b"] Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226113 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-client-ca\") pod \"e732cd40-8648-4aeb-91f7-627434c8d8e6\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226216 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmc2r\" (UniqueName: \"kubernetes.io/projected/e732cd40-8648-4aeb-91f7-627434c8d8e6-kube-api-access-kmc2r\") pod \"e732cd40-8648-4aeb-91f7-627434c8d8e6\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226344 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-config\") pod \"e732cd40-8648-4aeb-91f7-627434c8d8e6\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e732cd40-8648-4aeb-91f7-627434c8d8e6-serving-cert\") pod \"e732cd40-8648-4aeb-91f7-627434c8d8e6\" (UID: \"e732cd40-8648-4aeb-91f7-627434c8d8e6\") " Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d623e3b-8560-4189-a616-63d6625b5c3d-serving-cert\") pod \"route-controller-manager-784784d859-zht8b\" (UID: 
\"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226652 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-config\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226678 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4mvn\" (UniqueName: \"kubernetes.io/projected/8d623e3b-8560-4189-a616-63d6625b5c3d-kube-api-access-s4mvn\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.226779 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-client-ca\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.227500 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "e732cd40-8648-4aeb-91f7-627434c8d8e6" (UID: "e732cd40-8648-4aeb-91f7-627434c8d8e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.227755 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-config" (OuterVolumeSpecName: "config") pod "e732cd40-8648-4aeb-91f7-627434c8d8e6" (UID: "e732cd40-8648-4aeb-91f7-627434c8d8e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.232419 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e732cd40-8648-4aeb-91f7-627434c8d8e6-kube-api-access-kmc2r" (OuterVolumeSpecName: "kube-api-access-kmc2r") pod "e732cd40-8648-4aeb-91f7-627434c8d8e6" (UID: "e732cd40-8648-4aeb-91f7-627434c8d8e6"). InnerVolumeSpecName "kube-api-access-kmc2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.241125 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e732cd40-8648-4aeb-91f7-627434c8d8e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e732cd40-8648-4aeb-91f7-627434c8d8e6" (UID: "e732cd40-8648-4aeb-91f7-627434c8d8e6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328152 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-client-ca\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328252 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d623e3b-8560-4189-a616-63d6625b5c3d-serving-cert\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328279 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-config\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4mvn\" (UniqueName: \"kubernetes.io/projected/8d623e3b-8560-4189-a616-63d6625b5c3d-kube-api-access-s4mvn\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328335 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328348 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmc2r\" (UniqueName: \"kubernetes.io/projected/e732cd40-8648-4aeb-91f7-627434c8d8e6-kube-api-access-kmc2r\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328370 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e732cd40-8648-4aeb-91f7-627434c8d8e6-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.328379 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e732cd40-8648-4aeb-91f7-627434c8d8e6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.330076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-client-ca\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.331265 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-config\") pod \"route-controller-manager-784784d859-zht8b\" 
(UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.333841 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d623e3b-8560-4189-a616-63d6625b5c3d-serving-cert\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.347712 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4mvn\" (UniqueName: \"kubernetes.io/projected/8d623e3b-8560-4189-a616-63d6625b5c3d-kube-api-access-s4mvn\") pod \"route-controller-manager-784784d859-zht8b\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.433887 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.825523 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" event={"ID":"e732cd40-8648-4aeb-91f7-627434c8d8e6","Type":"ContainerDied","Data":"3ae5c401ff6f95d42a243ccd31afad2b78602b2a7dbb0ee402090ffffda8c538"} Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.825547 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.828914 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wn49x_8e75a98a-387a-4b8e-9b01-dc788830f478/kube-multus-additional-cni-plugins/0.log" Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.828956 4687 generic.go:334] "Generic (PLEG): container finished" podID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" exitCode=137 Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.828988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" event={"ID":"8e75a98a-387a-4b8e-9b01-dc788830f478","Type":"ContainerDied","Data":"99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841"} Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.839587 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s"] Mar 12 16:04:47 crc kubenswrapper[4687]: I0312 16:04:47.846101 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98dfd7bd9-vpm4s"] Mar 12 16:04:49 crc kubenswrapper[4687]: E0312 16:04:49.147457 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 12 16:04:49 crc kubenswrapper[4687]: E0312 16:04:49.147938 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:04:49 crc kubenswrapper[4687]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 12 16:04:49 crc kubenswrapper[4687]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dchh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29555524-2x297_openshift-infra(dd5ffe1b-8f89-43f0-95fb-8e3823891f2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 12 16:04:49 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:04:49 crc kubenswrapper[4687]: E0312 16:04:49.149515 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29555524-2x297" podUID="dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" Mar 12 16:04:49 crc kubenswrapper[4687]: I0312 16:04:49.743944 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e732cd40-8648-4aeb-91f7-627434c8d8e6" path="/var/lib/kubelet/pods/e732cd40-8648-4aeb-91f7-627434c8d8e6/volumes" Mar 12 16:04:49 crc kubenswrapper[4687]: E0312 16:04:49.841495 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29555524-2x297" podUID="dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" Mar 12 16:04:51 crc kubenswrapper[4687]: E0312 16:04:51.078506 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841 is running failed: container process not found" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:51 crc kubenswrapper[4687]: E0312 16:04:51.079116 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841 is running failed: container process not found" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:51 crc kubenswrapper[4687]: E0312 16:04:51.079624 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841 is running failed: container process not found" 
containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 16:04:51 crc kubenswrapper[4687]: E0312 16:04:51.079663 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" Mar 12 16:04:51 crc kubenswrapper[4687]: I0312 16:04:51.363527 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 16:04:52 crc kubenswrapper[4687]: E0312 16:04:52.732640 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 12 16:04:52 crc kubenswrapper[4687]: E0312 16:04:52.732850 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpxj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t8z8d_openshift-marketplace(09abed8f-f9fb-41fd-b864-daf0e5026ae5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 16:04:52 crc kubenswrapper[4687]: E0312 16:04:52.734098 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t8z8d" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" Mar 12 16:04:53 crc kubenswrapper[4687]: I0312 16:04:53.860923 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-784784d859-zht8b"] Mar 12 16:04:53 crc kubenswrapper[4687]: I0312 16:04:53.866342 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6567fb586c-2pcl6"] Mar 12 16:04:53 crc kubenswrapper[4687]: I0312 16:04:53.979311 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 16:04:53 crc kubenswrapper[4687]: I0312 16:04:53.980928 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:53 crc kubenswrapper[4687]: I0312 16:04:53.983867 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 16:04:53 crc kubenswrapper[4687]: I0312 16:04:53.986670 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 16:04:53 crc kubenswrapper[4687]: I0312 16:04:53.986677 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 16:04:54 crc kubenswrapper[4687]: I0312 16:04:54.124121 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:54 crc kubenswrapper[4687]: I0312 16:04:54.124188 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:54 crc kubenswrapper[4687]: I0312 16:04:54.225095 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:54 crc kubenswrapper[4687]: I0312 16:04:54.225481 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:54 crc kubenswrapper[4687]: I0312 16:04:54.225257 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:54 crc kubenswrapper[4687]: I0312 16:04:54.254447 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:54 crc 
kubenswrapper[4687]: E0312 16:04:54.273762 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t8z8d" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" Mar 12 16:04:54 crc kubenswrapper[4687]: I0312 16:04:54.299583 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:04:55 crc kubenswrapper[4687]: E0312 16:04:55.383850 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 16:04:55 crc kubenswrapper[4687]: E0312 16:04:55.384241 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjcpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q5vb4_openshift-marketplace(b0ddda34-ec8b-46b6-9b04-dcd34d30177c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 16:04:55 crc kubenswrapper[4687]: E0312 16:04:55.386586 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q5vb4" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.361694 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q5vb4" 
podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" Mar 12 16:04:56 crc kubenswrapper[4687]: I0312 16:04:56.401766 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.446838 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.447144 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtr5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rdstv_openshift-marketplace(c19b60e6-9340-4f6a-a9fc-5a804e5d5a06): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.448256 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rdstv" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.497622 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.497773 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wprbg_openshift-marketplace(f4e09ff9-9c13-4757-b36b-5a495caf6f07): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.499235 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wprbg" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.532248 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.532459 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l245c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k6n92_openshift-marketplace(6af9f3dd-14a8-4a00-b056-4d655e17b3f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 16:04:56 crc kubenswrapper[4687]: E0312 16:04:56.533687 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k6n92" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.783384 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wprbg" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.783456 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rdstv" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.783496 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k6n92" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" Mar 12 16:04:57 crc kubenswrapper[4687]: I0312 16:04:57.788627 4687 scope.go:117] "RemoveContainer" containerID="3ae16381352381f26af9f287aaad61d03c10f52b28cd1d249fb571d17a56b65b" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.884032 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.884470 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx4rq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-glm9k_openshift-marketplace(be373874-6fcd-41ac-ba5a-3090a71a145f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.885538 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-glm9k" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" Mar 12 16:04:57 crc kubenswrapper[4687]: I0312 16:04:57.892803 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wn49x_8e75a98a-387a-4b8e-9b01-dc788830f478/kube-multus-additional-cni-plugins/0.log" Mar 12 16:04:57 crc kubenswrapper[4687]: I0312 16:04:57.892871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" event={"ID":"8e75a98a-387a-4b8e-9b01-dc788830f478","Type":"ContainerDied","Data":"44eda4d272c0e4545dcbe0e680a511bb296bdfc1a75a8f9efa95d55ab1ff2f6c"} Mar 12 16:04:57 crc kubenswrapper[4687]: I0312 16:04:57.892896 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44eda4d272c0e4545dcbe0e680a511bb296bdfc1a75a8f9efa95d55ab1ff2f6c" Mar 12 16:04:57 crc kubenswrapper[4687]: I0312 16:04:57.909804 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wn49x_8e75a98a-387a-4b8e-9b01-dc788830f478/kube-multus-additional-cni-plugins/0.log" Mar 12 16:04:57 crc kubenswrapper[4687]: I0312 16:04:57.909886 4687 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.955020 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.955179 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4mls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vzpsd_openshift-marketplace(1e9d2fa4-739e-470f-95c2-0e547a7e147e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 16:04:57 crc kubenswrapper[4687]: E0312 16:04:57.956455 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vzpsd" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.070336 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8e75a98a-387a-4b8e-9b01-dc788830f478-ready\") pod \"8e75a98a-387a-4b8e-9b01-dc788830f478\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.070440 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e75a98a-387a-4b8e-9b01-dc788830f478-tuning-conf-dir\") pod \"8e75a98a-387a-4b8e-9b01-dc788830f478\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.070503 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cqd52\" (UniqueName: \"kubernetes.io/projected/8e75a98a-387a-4b8e-9b01-dc788830f478-kube-api-access-cqd52\") pod \"8e75a98a-387a-4b8e-9b01-dc788830f478\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.070554 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e75a98a-387a-4b8e-9b01-dc788830f478-cni-sysctl-allowlist\") pod \"8e75a98a-387a-4b8e-9b01-dc788830f478\" (UID: \"8e75a98a-387a-4b8e-9b01-dc788830f478\") " Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.070493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e75a98a-387a-4b8e-9b01-dc788830f478-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "8e75a98a-387a-4b8e-9b01-dc788830f478" (UID: "8e75a98a-387a-4b8e-9b01-dc788830f478"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.070781 4687 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e75a98a-387a-4b8e-9b01-dc788830f478-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.070983 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e75a98a-387a-4b8e-9b01-dc788830f478-ready" (OuterVolumeSpecName: "ready") pod "8e75a98a-387a-4b8e-9b01-dc788830f478" (UID: "8e75a98a-387a-4b8e-9b01-dc788830f478"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.071223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e75a98a-387a-4b8e-9b01-dc788830f478-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "8e75a98a-387a-4b8e-9b01-dc788830f478" (UID: "8e75a98a-387a-4b8e-9b01-dc788830f478"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.076592 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e75a98a-387a-4b8e-9b01-dc788830f478-kube-api-access-cqd52" (OuterVolumeSpecName: "kube-api-access-cqd52") pod "8e75a98a-387a-4b8e-9b01-dc788830f478" (UID: "8e75a98a-387a-4b8e-9b01-dc788830f478"). InnerVolumeSpecName "kube-api-access-cqd52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.171625 4687 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e75a98a-387a-4b8e-9b01-dc788830f478-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.171662 4687 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8e75a98a-387a-4b8e-9b01-dc788830f478-ready\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.171675 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqd52\" (UniqueName: \"kubernetes.io/projected/8e75a98a-387a-4b8e-9b01-dc788830f478-kube-api-access-cqd52\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.258570 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.273196 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6567fb586c-2pcl6"] Mar 12 16:04:58 crc kubenswrapper[4687]: W0312 16:04:58.274569 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe41b48_7128_4c8d_ab08_3598506bc103.slice/crio-e84f918990d30e1b96c548204a12076a102d6048f8f9df80a6d51c1cd29b600d WatchSource:0}: Error finding container e84f918990d30e1b96c548204a12076a102d6048f8f9df80a6d51c1cd29b600d: Status 404 returned error can't find the container with id e84f918990d30e1b96c548204a12076a102d6048f8f9df80a6d51c1cd29b600d Mar 12 16:04:58 crc kubenswrapper[4687]: W0312 16:04:58.277143 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod31f2f4ba_f01d_495f_b2c3_c496f313a74f.slice/crio-3fb75a6b3e9541ba71531b6c73f3c5342c6dcabe54566661c36204dc70ed284c WatchSource:0}: Error finding container 3fb75a6b3e9541ba71531b6c73f3c5342c6dcabe54566661c36204dc70ed284c: Status 404 returned error can't find the container with id 3fb75a6b3e9541ba71531b6c73f3c5342c6dcabe54566661c36204dc70ed284c Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.324234 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784784d859-zht8b"] Mar 12 16:04:58 crc kubenswrapper[4687]: W0312 16:04:58.331995 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d623e3b_8560_4189_a616_63d6625b5c3d.slice/crio-c899ead36daf3e395d57931a049a0a7307fb22fb393ab562b50c7b6826bb7243 WatchSource:0}: Error finding container c899ead36daf3e395d57931a049a0a7307fb22fb393ab562b50c7b6826bb7243: Status 404 returned error can't find the container with id c899ead36daf3e395d57931a049a0a7307fb22fb393ab562b50c7b6826bb7243 Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.898830 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" event={"ID":"5fe41b48-7128-4c8d-ab08-3598506bc103","Type":"ContainerStarted","Data":"da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4"} Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.899151 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.899163 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" event={"ID":"5fe41b48-7128-4c8d-ab08-3598506bc103","Type":"ContainerStarted","Data":"e84f918990d30e1b96c548204a12076a102d6048f8f9df80a6d51c1cd29b600d"} Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.898890 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" podUID="5fe41b48-7128-4c8d-ab08-3598506bc103" containerName="controller-manager" containerID="cri-o://da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4" gracePeriod=30 Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.901891 4687 generic.go:334] "Generic (PLEG): container finished" podID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerID="5cf5f4952d2379007fb93b07f572276793c6140f3e4fd6dc93aa413d2f466334" exitCode=0 Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.901941 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbrlj" event={"ID":"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb","Type":"ContainerDied","Data":"5cf5f4952d2379007fb93b07f572276793c6140f3e4fd6dc93aa413d2f466334"} Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.905467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"31f2f4ba-f01d-495f-b2c3-c496f313a74f","Type":"ContainerStarted","Data":"419c2c1b3432eaf896a80e1828e7732a534209f2c9d4906963d60b6a4160e531"} Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.905504 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"31f2f4ba-f01d-495f-b2c3-c496f313a74f","Type":"ContainerStarted","Data":"3fb75a6b3e9541ba71531b6c73f3c5342c6dcabe54566661c36204dc70ed284c"} Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.907795 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.908165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" event={"ID":"8d623e3b-8560-4189-a616-63d6625b5c3d","Type":"ContainerStarted","Data":"4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2"} Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.908197 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" event={"ID":"8d623e3b-8560-4189-a616-63d6625b5c3d","Type":"ContainerStarted","Data":"c899ead36daf3e395d57931a049a0a7307fb22fb393ab562b50c7b6826bb7243"} Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.908236 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wn49x" Mar 12 16:04:58 crc kubenswrapper[4687]: E0312 16:04:58.909665 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vzpsd" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" Mar 12 16:04:58 crc kubenswrapper[4687]: E0312 16:04:58.910476 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-glm9k" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.914597 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" podUID="8d623e3b-8560-4189-a616-63d6625b5c3d" containerName="route-controller-manager" containerID="cri-o://4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2" gracePeriod=30 Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.925649 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" podStartSLOduration=25.925631439 podStartE2EDuration="25.925631439s" podCreationTimestamp="2026-03-12 16:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:58.919143794 +0000 UTC m=+147.883106158" watchObservedRunningTime="2026-03-12 16:04:58.925631439 +0000 UTC m=+147.889593783" Mar 12 16:04:58 crc kubenswrapper[4687]: I0312 16:04:58.952163 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=5.952147065 podStartE2EDuration="5.952147065s" podCreationTimestamp="2026-03-12 16:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:58.948119836 +0000 UTC m=+147.912082180" watchObservedRunningTime="2026-03-12 16:04:58.952147065 +0000 UTC m=+147.916109409" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.004027 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" podStartSLOduration=26.004011016 podStartE2EDuration="26.004011016s" podCreationTimestamp="2026-03-12 16:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:04:59.002142106 +0000 UTC m=+147.966104460" watchObservedRunningTime="2026-03-12 16:04:59.004011016 +0000 UTC m=+147.967973360" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.038429 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wn49x"] Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.043546 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wn49x"] Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.269583 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.274607 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.396144 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmkxb\" (UniqueName: \"kubernetes.io/projected/5fe41b48-7128-4c8d-ab08-3598506bc103-kube-api-access-kmkxb\") pod \"5fe41b48-7128-4c8d-ab08-3598506bc103\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.396189 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-client-ca\") pod \"8d623e3b-8560-4189-a616-63d6625b5c3d\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.396209 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-client-ca\") pod \"5fe41b48-7128-4c8d-ab08-3598506bc103\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.396246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-config\") pod \"8d623e3b-8560-4189-a616-63d6625b5c3d\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.396273 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d623e3b-8560-4189-a616-63d6625b5c3d-serving-cert\") pod \"8d623e3b-8560-4189-a616-63d6625b5c3d\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.396840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8d623e3b-8560-4189-a616-63d6625b5c3d" (UID: "8d623e3b-8560-4189-a616-63d6625b5c3d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.397207 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-config" (OuterVolumeSpecName: "config") pod "8d623e3b-8560-4189-a616-63d6625b5c3d" (UID: "8d623e3b-8560-4189-a616-63d6625b5c3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.397301 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-client-ca" (OuterVolumeSpecName: "client-ca") pod "5fe41b48-7128-4c8d-ab08-3598506bc103" (UID: "5fe41b48-7128-4c8d-ab08-3598506bc103"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.397289 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe41b48-7128-4c8d-ab08-3598506bc103-serving-cert\") pod \"5fe41b48-7128-4c8d-ab08-3598506bc103\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.397698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-proxy-ca-bundles\") pod \"5fe41b48-7128-4c8d-ab08-3598506bc103\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.397720 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-config\") pod \"5fe41b48-7128-4c8d-ab08-3598506bc103\" (UID: \"5fe41b48-7128-4c8d-ab08-3598506bc103\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.398128 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5fe41b48-7128-4c8d-ab08-3598506bc103" (UID: "5fe41b48-7128-4c8d-ab08-3598506bc103"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.398220 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4mvn\" (UniqueName: \"kubernetes.io/projected/8d623e3b-8560-4189-a616-63d6625b5c3d-kube-api-access-s4mvn\") pod \"8d623e3b-8560-4189-a616-63d6625b5c3d\" (UID: \"8d623e3b-8560-4189-a616-63d6625b5c3d\") " Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.398245 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-config" (OuterVolumeSpecName: "config") pod "5fe41b48-7128-4c8d-ab08-3598506bc103" (UID: "5fe41b48-7128-4c8d-ab08-3598506bc103"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.398965 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.398988 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.398997 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d623e3b-8560-4189-a616-63d6625b5c3d-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.399009 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.399018 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe41b48-7128-4c8d-ab08-3598506bc103-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.402384 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe41b48-7128-4c8d-ab08-3598506bc103-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fe41b48-7128-4c8d-ab08-3598506bc103" (UID: "5fe41b48-7128-4c8d-ab08-3598506bc103"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.402428 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d623e3b-8560-4189-a616-63d6625b5c3d-kube-api-access-s4mvn" (OuterVolumeSpecName: "kube-api-access-s4mvn") pod "8d623e3b-8560-4189-a616-63d6625b5c3d" (UID: "8d623e3b-8560-4189-a616-63d6625b5c3d"). InnerVolumeSpecName "kube-api-access-s4mvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.402616 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe41b48-7128-4c8d-ab08-3598506bc103-kube-api-access-kmkxb" (OuterVolumeSpecName: "kube-api-access-kmkxb") pod "5fe41b48-7128-4c8d-ab08-3598506bc103" (UID: "5fe41b48-7128-4c8d-ab08-3598506bc103"). InnerVolumeSpecName "kube-api-access-kmkxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.403654 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d623e3b-8560-4189-a616-63d6625b5c3d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8d623e3b-8560-4189-a616-63d6625b5c3d" (UID: "8d623e3b-8560-4189-a616-63d6625b5c3d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.500442 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4mvn\" (UniqueName: \"kubernetes.io/projected/8d623e3b-8560-4189-a616-63d6625b5c3d-kube-api-access-s4mvn\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.500742 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmkxb\" (UniqueName: \"kubernetes.io/projected/5fe41b48-7128-4c8d-ab08-3598506bc103-kube-api-access-kmkxb\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.500753 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d623e3b-8560-4189-a616-63d6625b5c3d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.500764 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe41b48-7128-4c8d-ab08-3598506bc103-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.741281 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" path="/var/lib/kubelet/pods/8e75a98a-387a-4b8e-9b01-dc788830f478/volumes" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.915556 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbrlj" event={"ID":"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb","Type":"ContainerStarted","Data":"eabdf4da168b26ba0f5792f00a1eff2c64e2460435baec5b249510a55cfc0462"} Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.917051 4687 generic.go:334] "Generic (PLEG): container finished" podID="31f2f4ba-f01d-495f-b2c3-c496f313a74f" containerID="419c2c1b3432eaf896a80e1828e7732a534209f2c9d4906963d60b6a4160e531" exitCode=0 Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.917099 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"31f2f4ba-f01d-495f-b2c3-c496f313a74f","Type":"ContainerDied","Data":"419c2c1b3432eaf896a80e1828e7732a534209f2c9d4906963d60b6a4160e531"} Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.918464 4687 generic.go:334] "Generic (PLEG): container finished" podID="8d623e3b-8560-4189-a616-63d6625b5c3d" containerID="4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2" exitCode=0 Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.918521 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.918530 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" event={"ID":"8d623e3b-8560-4189-a616-63d6625b5c3d","Type":"ContainerDied","Data":"4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2"} Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.918581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-784784d859-zht8b" event={"ID":"8d623e3b-8560-4189-a616-63d6625b5c3d","Type":"ContainerDied","Data":"c899ead36daf3e395d57931a049a0a7307fb22fb393ab562b50c7b6826bb7243"} Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.918602 4687 scope.go:117] "RemoveContainer" containerID="4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.921736 4687 generic.go:334] "Generic (PLEG): container finished" podID="5fe41b48-7128-4c8d-ab08-3598506bc103" containerID="da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4" exitCode=0 Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.921770 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" event={"ID":"5fe41b48-7128-4c8d-ab08-3598506bc103","Type":"ContainerDied","Data":"da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4"} Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.921795 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" event={"ID":"5fe41b48-7128-4c8d-ab08-3598506bc103","Type":"ContainerDied","Data":"e84f918990d30e1b96c548204a12076a102d6048f8f9df80a6d51c1cd29b600d"} Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.921816 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6567fb586c-2pcl6" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.932755 4687 scope.go:117] "RemoveContainer" containerID="4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.938964 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kbrlj" podStartSLOduration=3.877243116 podStartE2EDuration="38.938945347s" podCreationTimestamp="2026-03-12 16:04:21 +0000 UTC" firstStartedPulling="2026-03-12 16:04:24.315834718 +0000 UTC m=+113.279797062" lastFinishedPulling="2026-03-12 16:04:59.377536949 +0000 UTC m=+148.341499293" observedRunningTime="2026-03-12 16:04:59.933713595 +0000 UTC m=+148.897675959" watchObservedRunningTime="2026-03-12 16:04:59.938945347 +0000 UTC m=+148.902907691" Mar 12 16:04:59 crc kubenswrapper[4687]: E0312 16:04:59.940376 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2\": container with ID starting with 4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2 not found: ID does not exist" containerID="4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.940421 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2"} err="failed to get container status \"4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2\": rpc error: code = NotFound desc = could not find container \"4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2\": container with ID starting with 4cf40cc70ea334cbe36d68002c27d7588b13f877432cfd94ad019dd9a148dcb2 not found: ID does not exist" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.940446 4687 scope.go:117] "RemoveContainer" containerID="da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.950125 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784784d859-zht8b"] Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.953303 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-784784d859-zht8b"] Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.963411 4687 scope.go:117] "RemoveContainer" containerID="da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4" Mar 12 16:04:59 crc kubenswrapper[4687]: E0312 16:04:59.963848 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4\": container with ID starting with da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4 not found: ID does not exist" containerID="da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.963884 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4"} err="failed to get container status \"da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4\": rpc error: code = NotFound 
desc = could not find container \"da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4\": container with ID starting with da40bff6ef1791840572d4bbef19300a20c0363dca7e5729ee9963790a9557b4 not found: ID does not exist" Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.972960 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6567fb586c-2pcl6"] Mar 12 16:04:59 crc kubenswrapper[4687]: I0312 16:04:59.975718 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6567fb586c-2pcl6"] Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819024 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c9d85f675-crz52"] Mar 12 16:05:00 crc kubenswrapper[4687]: E0312 16:05:00.819285 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819298 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" Mar 12 16:05:00 crc kubenswrapper[4687]: E0312 16:05:00.819314 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe41b48-7128-4c8d-ab08-3598506bc103" containerName="controller-manager" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819322 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe41b48-7128-4c8d-ab08-3598506bc103" containerName="controller-manager" Mar 12 16:05:00 crc kubenswrapper[4687]: E0312 16:05:00.819330 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d623e3b-8560-4189-a616-63d6625b5c3d" containerName="route-controller-manager" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819336 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d623e3b-8560-4189-a616-63d6625b5c3d" containerName="route-controller-manager" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819462 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e75a98a-387a-4b8e-9b01-dc788830f478" containerName="kube-multus-additional-cni-plugins" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819475 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe41b48-7128-4c8d-ab08-3598506bc103" containerName="controller-manager" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819486 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d623e3b-8560-4189-a616-63d6625b5c3d" containerName="route-controller-manager" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.819895 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.824282 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.824893 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.824913 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.825041 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.824893 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.825117 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.828151 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.829198 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x"] Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.835952 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.836824 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c9d85f675-crz52"] Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.838236 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.838548 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.839982 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.840443 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.840627 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.840678 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.846523 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x"] Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.948984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdmzq\" (UniqueName: 
\"kubernetes.io/projected/f019fa8b-a2da-42f1-91ba-0372fa992a65-kube-api-access-mdmzq\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.949448 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f019fa8b-a2da-42f1-91ba-0372fa992a65-serving-cert\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.950208 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-client-ca\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.950276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-client-ca\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.950298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-proxy-ca-bundles\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.950329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-config\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.950353 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-serving-cert\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.950400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwvp2\" (UniqueName: \"kubernetes.io/projected/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-kube-api-access-pwvp2\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:00 crc kubenswrapper[4687]: I0312 16:05:00.950424 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-config\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052013 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-serving-cert\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052071 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwvp2\" (UniqueName: \"kubernetes.io/projected/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-kube-api-access-pwvp2\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052098 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-config\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdmzq\" (UniqueName: \"kubernetes.io/projected/f019fa8b-a2da-42f1-91ba-0372fa992a65-kube-api-access-mdmzq\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052186 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f019fa8b-a2da-42f1-91ba-0372fa992a65-serving-cert\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052214 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-client-ca\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052248 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-client-ca\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052269 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-proxy-ca-bundles\") pod \"controller-manager-7c9d85f675-crz52\" 
(UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.052300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-config\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.054195 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-config\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.054375 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-client-ca\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.055461 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-client-ca\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.055489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-proxy-ca-bundles\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.056787 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-config\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.058609 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f019fa8b-a2da-42f1-91ba-0372fa992a65-serving-cert\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.058748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-serving-cert\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.073843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mdmzq\" (UniqueName: \"kubernetes.io/projected/f019fa8b-a2da-42f1-91ba-0372fa992a65-kube-api-access-mdmzq\") pod \"route-controller-manager-9698f6dd4-4nt4x\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.076638 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwvp2\" (UniqueName: \"kubernetes.io/projected/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-kube-api-access-pwvp2\") pod \"controller-manager-7c9d85f675-crz52\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.148502 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.167015 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.212801 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.255733 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kubelet-dir\") pod \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.255812 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kube-api-access\") pod \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\" (UID: \"31f2f4ba-f01d-495f-b2c3-c496f313a74f\") " Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.256494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "31f2f4ba-f01d-495f-b2c3-c496f313a74f" (UID: "31f2f4ba-f01d-495f-b2c3-c496f313a74f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.261398 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "31f2f4ba-f01d-495f-b2c3-c496f313a74f" (UID: "31f2f4ba-f01d-495f-b2c3-c496f313a74f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.334915 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c9d85f675-crz52"] Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.356780 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.356811 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31f2f4ba-f01d-495f-b2c3-c496f313a74f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.376666 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x"] Mar 12 16:05:01 crc kubenswrapper[4687]: W0312 16:05:01.382898 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf019fa8b_a2da_42f1_91ba_0372fa992a65.slice/crio-5a7dc8f9e98cce14f4d08b4fbba70b5c9af05ee1af5185301f749f171a7f133e WatchSource:0}: Error finding container 5a7dc8f9e98cce14f4d08b4fbba70b5c9af05ee1af5185301f749f171a7f133e: Status 404 returned error can't find the container with id 5a7dc8f9e98cce14f4d08b4fbba70b5c9af05ee1af5185301f749f171a7f133e Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.410928 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.411039 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.746593 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe41b48-7128-4c8d-ab08-3598506bc103" path="/var/lib/kubelet/pods/5fe41b48-7128-4c8d-ab08-3598506bc103/volumes" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.747636 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d623e3b-8560-4189-a616-63d6625b5c3d" path="/var/lib/kubelet/pods/8d623e3b-8560-4189-a616-63d6625b5c3d/volumes" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.941609 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" event={"ID":"88c4bd45-19f3-4023-a174-25b3dcfe0ef3","Type":"ContainerStarted","Data":"0f541b7ac6f472d5635a6e220b4a4466e8b566d97f51f7bde3aca9c4ddcced1a"} Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.941660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" event={"ID":"88c4bd45-19f3-4023-a174-25b3dcfe0ef3","Type":"ContainerStarted","Data":"99b27098bf00ff1cbe27a5192233c19695082ad085c61a7d142674706563ab3d"} Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.941786 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.943057 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" 
event={"ID":"f019fa8b-a2da-42f1-91ba-0372fa992a65","Type":"ContainerStarted","Data":"2a94892626f059337b70ee4b9cecc73bec2e99e983834cd74df1b31b59951b39"} Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.943083 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" event={"ID":"f019fa8b-a2da-42f1-91ba-0372fa992a65","Type":"ContainerStarted","Data":"5a7dc8f9e98cce14f4d08b4fbba70b5c9af05ee1af5185301f749f171a7f133e"} Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.943383 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.944448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"31f2f4ba-f01d-495f-b2c3-c496f313a74f","Type":"ContainerDied","Data":"3fb75a6b3e9541ba71531b6c73f3c5342c6dcabe54566661c36204dc70ed284c"} Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.944470 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.944480 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb75a6b3e9541ba71531b6c73f3c5342c6dcabe54566661c36204dc70ed284c" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.947514 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:01 crc kubenswrapper[4687]: I0312 16:05:01.969753 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:02 crc kubenswrapper[4687]: I0312 16:05:02.007249 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" podStartSLOduration=9.007226877 podStartE2EDuration="9.007226877s" podCreationTimestamp="2026-03-12 16:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:05:01.975421067 +0000 UTC m=+150.939383421" watchObservedRunningTime="2026-03-12 16:05:02.007226877 +0000 UTC m=+150.971189221" Mar 12 16:05:02 crc kubenswrapper[4687]: I0312 16:05:02.429641 4687 csr.go:261] certificate signing request csr-lcv8g is approved, waiting to be issued Mar 12 16:05:02 crc kubenswrapper[4687]: I0312 16:05:02.437827 4687 csr.go:257] certificate signing request csr-lcv8g is issued Mar 12 16:05:02 crc kubenswrapper[4687]: I0312 16:05:02.543734 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kbrlj" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="registry-server" probeResult="failure" output=< Mar 12 16:05:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:05:02 crc kubenswrapper[4687]: > Mar 12 16:05:02 crc kubenswrapper[4687]: I0312 16:05:02.951501 4687 generic.go:334] "Generic (PLEG): container finished" podID="dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" containerID="f70e2e2ceb248ada59de7f90e434e97c7ba90306ab1f202b6e4b1d5cb02b724a" exitCode=0 Mar 12 16:05:02 crc kubenswrapper[4687]: I0312 16:05:02.951580 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29555524-2x297" event={"ID":"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b","Type":"ContainerDied","Data":"f70e2e2ceb248ada59de7f90e434e97c7ba90306ab1f202b6e4b1d5cb02b724a"} Mar 12 16:05:02 crc kubenswrapper[4687]: I0312 16:05:02.967981 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" podStartSLOduration=9.967962274 podStartE2EDuration="9.967962274s" podCreationTimestamp="2026-03-12 16:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:05:02.02656715 +0000 UTC m=+150.990529494" watchObservedRunningTime="2026-03-12 16:05:02.967962274 +0000 UTC m=+151.931924618" Mar 12 16:05:03 crc kubenswrapper[4687]: I0312 16:05:03.439729 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-12 12:32:49.569199702 +0000 UTC Mar 12 16:05:03 crc kubenswrapper[4687]: I0312 16:05:03.439953 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6596h27m46.129249515s for next certificate rotation Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.200444 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555524-2x297" Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.322860 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dchh6\" (UniqueName: \"kubernetes.io/projected/dd5ffe1b-8f89-43f0-95fb-8e3823891f2b-kube-api-access-dchh6\") pod \"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b\" (UID: \"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b\") " Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.330836 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5ffe1b-8f89-43f0-95fb-8e3823891f2b-kube-api-access-dchh6" (OuterVolumeSpecName: "kube-api-access-dchh6") pod "dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" (UID: "dd5ffe1b-8f89-43f0-95fb-8e3823891f2b"). InnerVolumeSpecName "kube-api-access-dchh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.425182 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dchh6\" (UniqueName: \"kubernetes.io/projected/dd5ffe1b-8f89-43f0-95fb-8e3823891f2b-kube-api-access-dchh6\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.440756 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-20 05:50:41.679328704 +0000 UTC Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.440799 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6061h45m37.238532856s for next certificate rotation Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.966843 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555524-2x297" event={"ID":"dd5ffe1b-8f89-43f0-95fb-8e3823891f2b","Type":"ContainerDied","Data":"add797611e7ddb663a31570f338b39a08edb582dca475232cdccd470a00b5b99"} Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.966885 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="add797611e7ddb663a31570f338b39a08edb582dca475232cdccd470a00b5b99" Mar 12 16:05:04 crc kubenswrapper[4687]: I0312 16:05:04.966936 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555524-2x297" Mar 12 16:05:08 crc kubenswrapper[4687]: I0312 16:05:08.990594 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 16:05:08 crc kubenswrapper[4687]: E0312 16:05:08.996735 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f2f4ba-f01d-495f-b2c3-c496f313a74f" containerName="pruner" Mar 12 16:05:08 crc kubenswrapper[4687]: I0312 16:05:08.996954 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f2f4ba-f01d-495f-b2c3-c496f313a74f" containerName="pruner" Mar 12 16:05:08 crc kubenswrapper[4687]: E0312 16:05:08.997343 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" containerName="oc" Mar 12 16:05:08 crc kubenswrapper[4687]: I0312 16:05:08.997387 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" containerName="oc" Mar 12 16:05:08 crc kubenswrapper[4687]: I0312 16:05:08.997771 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f2f4ba-f01d-495f-b2c3-c496f313a74f" containerName="pruner" Mar 12 16:05:08 crc kubenswrapper[4687]: I0312 16:05:08.997804 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" containerName="oc" Mar 12 16:05:08 crc kubenswrapper[4687]: I0312 16:05:08.998477 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:08 crc kubenswrapper[4687]: I0312 16:05:08.999240 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.000226 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8z8d" event={"ID":"09abed8f-f9fb-41fd-b864-daf0e5026ae5","Type":"ContainerStarted","Data":"afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1"} Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.001986 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.002151 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.004235 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vb4" event={"ID":"b0ddda34-ec8b-46b6-9b04-dcd34d30177c","Type":"ContainerStarted","Data":"95b4626731f5ce90062aa36f6090b63df229d141f2280ae7f65d8a7b20e4acd3"} Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.102739 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/296d3d6d-ef43-4965-a804-d1bb0c126d64-kube-api-access\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.102903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-kubelet-dir\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.102968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-var-lock\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.204345 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/296d3d6d-ef43-4965-a804-d1bb0c126d64-kube-api-access\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.204464 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-kubelet-dir\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.204512 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-var-lock\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 
16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.204615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-var-lock\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.204636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-kubelet-dir\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.230327 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/296d3d6d-ef43-4965-a804-d1bb0c126d64-kube-api-access\") pod \"installer-9-crc\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.323977 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:09 crc kubenswrapper[4687]: I0312 16:05:09.712149 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 16:05:09 crc kubenswrapper[4687]: W0312 16:05:09.722255 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod296d3d6d_ef43_4965_a804_d1bb0c126d64.slice/crio-1c1bf4af36c153483faa5de3ebbf60da398e29b1b172434ead150376c0df6205 WatchSource:0}: Error finding container 1c1bf4af36c153483faa5de3ebbf60da398e29b1b172434ead150376c0df6205: Status 404 returned error can't find the container with id 1c1bf4af36c153483faa5de3ebbf60da398e29b1b172434ead150376c0df6205 Mar 12 16:05:10 crc kubenswrapper[4687]: I0312 16:05:10.014079 4687 generic.go:334] "Generic (PLEG): container finished" podID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerID="afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1" exitCode=0 Mar 12 16:05:10 crc kubenswrapper[4687]: I0312 16:05:10.014179 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8z8d" event={"ID":"09abed8f-f9fb-41fd-b864-daf0e5026ae5","Type":"ContainerDied","Data":"afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1"} Mar 12 16:05:10 crc kubenswrapper[4687]: I0312 16:05:10.015809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"296d3d6d-ef43-4965-a804-d1bb0c126d64","Type":"ContainerStarted","Data":"1c1bf4af36c153483faa5de3ebbf60da398e29b1b172434ead150376c0df6205"} Mar 12 16:05:10 crc kubenswrapper[4687]: I0312 16:05:10.018797 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerID="95b4626731f5ce90062aa36f6090b63df229d141f2280ae7f65d8a7b20e4acd3" exitCode=0 Mar 12 16:05:10 crc kubenswrapper[4687]: I0312 16:05:10.018823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vb4" event={"ID":"b0ddda34-ec8b-46b6-9b04-dcd34d30177c","Type":"ContainerDied","Data":"95b4626731f5ce90062aa36f6090b63df229d141f2280ae7f65d8a7b20e4acd3"} Mar 12 16:05:11 crc kubenswrapper[4687]: I0312 16:05:11.025747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"296d3d6d-ef43-4965-a804-d1bb0c126d64","Type":"ContainerStarted","Data":"e93741b23d08ac5a9ffe5d11b0a09c11346683d4ddf9df96657e37a95b9783b3"} Mar 12 16:05:11 crc kubenswrapper[4687]: I0312 16:05:11.039057 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.03904168 podStartE2EDuration="3.03904168s" podCreationTimestamp="2026-03-12 16:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:05:11.038559747 +0000 UTC m=+160.002522101" watchObservedRunningTime="2026-03-12 16:05:11.03904168 +0000 UTC m=+160.003004044" Mar 12 16:05:11 crc kubenswrapper[4687]: I0312 16:05:11.515675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:05:11 crc kubenswrapper[4687]: I0312 16:05:11.581213 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:05:12 crc kubenswrapper[4687]: I0312 16:05:12.413681 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbrlj"] Mar 12 16:05:13 crc kubenswrapper[4687]: I0312 16:05:13.034801 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kbrlj" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="registry-server" containerID="cri-o://eabdf4da168b26ba0f5792f00a1eff2c64e2460435baec5b249510a55cfc0462" gracePeriod=2 Mar 12 16:05:13 crc kubenswrapper[4687]: I0312 16:05:13.858281 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c9d85f675-crz52"] Mar 12 16:05:13 crc kubenswrapper[4687]: I0312 16:05:13.858624 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" podUID="88c4bd45-19f3-4023-a174-25b3dcfe0ef3" containerName="controller-manager" containerID="cri-o://0f541b7ac6f472d5635a6e220b4a4466e8b566d97f51f7bde3aca9c4ddcced1a" gracePeriod=30 Mar 12 16:05:13 crc kubenswrapper[4687]: I0312 16:05:13.882591 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x"] Mar 12 16:05:13 crc kubenswrapper[4687]: I0312 16:05:13.883225 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" podUID="f019fa8b-a2da-42f1-91ba-0372fa992a65" containerName="route-controller-manager" containerID="cri-o://2a94892626f059337b70ee4b9cecc73bec2e99e983834cd74df1b31b59951b39" gracePeriod=30 Mar 12 16:05:14 crc kubenswrapper[4687]: I0312 16:05:14.041683 4687 generic.go:334] "Generic (PLEG): container finished" podID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerID="eabdf4da168b26ba0f5792f00a1eff2c64e2460435baec5b249510a55cfc0462" exitCode=0 Mar 12 16:05:14 crc kubenswrapper[4687]: I0312 16:05:14.041730 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbrlj" event={"ID":"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb","Type":"ContainerDied","Data":"eabdf4da168b26ba0f5792f00a1eff2c64e2460435baec5b249510a55cfc0462"} Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.049409 4687 generic.go:334] "Generic (PLEG): 
container finished" podID="88c4bd45-19f3-4023-a174-25b3dcfe0ef3" containerID="0f541b7ac6f472d5635a6e220b4a4466e8b566d97f51f7bde3aca9c4ddcced1a" exitCode=0 Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.049521 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" event={"ID":"88c4bd45-19f3-4023-a174-25b3dcfe0ef3","Type":"ContainerDied","Data":"0f541b7ac6f472d5635a6e220b4a4466e8b566d97f51f7bde3aca9c4ddcced1a"} Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.051678 4687 generic.go:334] "Generic (PLEG): container finished" podID="f019fa8b-a2da-42f1-91ba-0372fa992a65" containerID="2a94892626f059337b70ee4b9cecc73bec2e99e983834cd74df1b31b59951b39" exitCode=0 Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.051712 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" event={"ID":"f019fa8b-a2da-42f1-91ba-0372fa992a65","Type":"ContainerDied","Data":"2a94892626f059337b70ee4b9cecc73bec2e99e983834cd74df1b31b59951b39"} Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.475731 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.482330 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.486295 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510149 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl"] Mar 12 16:05:15 crc kubenswrapper[4687]: E0312 16:05:15.510393 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f019fa8b-a2da-42f1-91ba-0372fa992a65" containerName="route-controller-manager" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510406 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f019fa8b-a2da-42f1-91ba-0372fa992a65" containerName="route-controller-manager" Mar 12 16:05:15 crc kubenswrapper[4687]: E0312 16:05:15.510443 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c4bd45-19f3-4023-a174-25b3dcfe0ef3" containerName="controller-manager" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510450 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c4bd45-19f3-4023-a174-25b3dcfe0ef3" containerName="controller-manager" Mar 12 16:05:15 crc kubenswrapper[4687]: E0312 16:05:15.510464 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="extract-utilities" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510471 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="extract-utilities" Mar 12 16:05:15 crc kubenswrapper[4687]: E0312 16:05:15.510483 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="registry-server" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510489 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="registry-server" Mar 
12 16:05:15 crc kubenswrapper[4687]: E0312 16:05:15.510499 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="extract-content" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510504 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="extract-content" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510596 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c4bd45-19f3-4023-a174-25b3dcfe0ef3" containerName="controller-manager" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510607 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f019fa8b-a2da-42f1-91ba-0372fa992a65" containerName="route-controller-manager" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.510622 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" containerName="registry-server" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.512507 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.521418 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl"] Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.596988 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-serving-cert\") pod \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.597287 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f019fa8b-a2da-42f1-91ba-0372fa992a65-serving-cert\") pod \"f019fa8b-a2da-42f1-91ba-0372fa992a65\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.598063 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-config\") pod \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.598227 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-config\") pod \"f019fa8b-a2da-42f1-91ba-0372fa992a65\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.598343 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzpbh\" (UniqueName: \"kubernetes.io/projected/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-kube-api-access-dzpbh\") pod \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.598575 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-catalog-content\") pod \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " Mar 12 16:05:15 crc 
kubenswrapper[4687]: I0312 16:05:15.598925 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-config" (OuterVolumeSpecName: "config") pod "88c4bd45-19f3-4023-a174-25b3dcfe0ef3" (UID: "88c4bd45-19f3-4023-a174-25b3dcfe0ef3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599052 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-client-ca\") pod \"f019fa8b-a2da-42f1-91ba-0372fa992a65\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599115 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-config" (OuterVolumeSpecName: "config") pod "f019fa8b-a2da-42f1-91ba-0372fa992a65" (UID: "f019fa8b-a2da-42f1-91ba-0372fa992a65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdmzq\" (UniqueName: \"kubernetes.io/projected/f019fa8b-a2da-42f1-91ba-0372fa992a65-kube-api-access-mdmzq\") pod \"f019fa8b-a2da-42f1-91ba-0372fa992a65\" (UID: \"f019fa8b-a2da-42f1-91ba-0372fa992a65\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599381 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-proxy-ca-bundles\") pod \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599489 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-client-ca\") pod \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599585 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwvp2\" (UniqueName: \"kubernetes.io/projected/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-kube-api-access-pwvp2\") pod \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\" (UID: \"88c4bd45-19f3-4023-a174-25b3dcfe0ef3\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599501 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-client-ca" (OuterVolumeSpecName: "client-ca") pod "f019fa8b-a2da-42f1-91ba-0372fa992a65" (UID: "f019fa8b-a2da-42f1-91ba-0372fa992a65"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.599680 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-utilities\") pod \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\" (UID: \"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb\") " Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.600257 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.600502 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.600602 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f019fa8b-a2da-42f1-91ba-0372fa992a65-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.600375 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88c4bd45-19f3-4023-a174-25b3dcfe0ef3" (UID: "88c4bd45-19f3-4023-a174-25b3dcfe0ef3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.600623 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-client-ca" (OuterVolumeSpecName: "client-ca") pod "88c4bd45-19f3-4023-a174-25b3dcfe0ef3" (UID: "88c4bd45-19f3-4023-a174-25b3dcfe0ef3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.603196 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f019fa8b-a2da-42f1-91ba-0372fa992a65-kube-api-access-mdmzq" (OuterVolumeSpecName: "kube-api-access-mdmzq") pod "f019fa8b-a2da-42f1-91ba-0372fa992a65" (UID: "f019fa8b-a2da-42f1-91ba-0372fa992a65"). InnerVolumeSpecName "kube-api-access-mdmzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.604350 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f019fa8b-a2da-42f1-91ba-0372fa992a65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f019fa8b-a2da-42f1-91ba-0372fa992a65" (UID: "f019fa8b-a2da-42f1-91ba-0372fa992a65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.608954 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-utilities" (OuterVolumeSpecName: "utilities") pod "82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" (UID: "82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.612021 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88c4bd45-19f3-4023-a174-25b3dcfe0ef3" (UID: "88c4bd45-19f3-4023-a174-25b3dcfe0ef3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.612291 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-kube-api-access-dzpbh" (OuterVolumeSpecName: "kube-api-access-dzpbh") pod "82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" (UID: "82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb"). InnerVolumeSpecName "kube-api-access-dzpbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.619294 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-kube-api-access-pwvp2" (OuterVolumeSpecName: "kube-api-access-pwvp2") pod "88c4bd45-19f3-4023-a174-25b3dcfe0ef3" (UID: "88c4bd45-19f3-4023-a174-25b3dcfe0ef3"). InnerVolumeSpecName "kube-api-access-pwvp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-client-ca\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702403 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-serving-cert\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702429 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghr9\" (UniqueName: \"kubernetes.io/projected/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-kube-api-access-lghr9\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702489 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-config\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702534 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzpbh\" (UniqueName: \"kubernetes.io/projected/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-kube-api-access-dzpbh\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 
16:05:15.702549 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdmzq\" (UniqueName: \"kubernetes.io/projected/f019fa8b-a2da-42f1-91ba-0372fa992a65-kube-api-access-mdmzq\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702561 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702576 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702590 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwvp2\" (UniqueName: \"kubernetes.io/projected/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-kube-api-access-pwvp2\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702601 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702613 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88c4bd45-19f3-4023-a174-25b3dcfe0ef3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.702630 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f019fa8b-a2da-42f1-91ba-0372fa992a65-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.748210 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" (UID: "82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.804142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-config\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.804265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-client-ca\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.804308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-serving-cert\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.804332 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lghr9\" (UniqueName: \"kubernetes.io/projected/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-kube-api-access-lghr9\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.804424 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.805299 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-client-ca\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.805348 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-config\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.808457 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-serving-cert\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.820063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghr9\" (UniqueName: 
\"kubernetes.io/projected/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-kube-api-access-lghr9\") pod \"route-controller-manager-dccfdd746-ft2hl\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:15 crc kubenswrapper[4687]: I0312 16:05:15.838268 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.059196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbrlj" event={"ID":"82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb","Type":"ContainerDied","Data":"fb4522d4863004e09b552a8d1eb81c60a27577e47c3b9f431addca61515c847e"} Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.059259 4687 scope.go:117] "RemoveContainer" containerID="eabdf4da168b26ba0f5792f00a1eff2c64e2460435baec5b249510a55cfc0462" Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.059443 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbrlj" Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.061415 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.061475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c9d85f675-crz52" event={"ID":"88c4bd45-19f3-4023-a174-25b3dcfe0ef3","Type":"ContainerDied","Data":"99b27098bf00ff1cbe27a5192233c19695082ad085c61a7d142674706563ab3d"} Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.063458 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" event={"ID":"f019fa8b-a2da-42f1-91ba-0372fa992a65","Type":"ContainerDied","Data":"5a7dc8f9e98cce14f4d08b4fbba70b5c9af05ee1af5185301f749f171a7f133e"} Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.063578 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x" Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.087637 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c9d85f675-crz52"] Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.101402 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c9d85f675-crz52"] Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.110280 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x"] Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.115080 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9698f6dd4-4nt4x"] Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.130262 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbrlj"] Mar 12 16:05:16 crc kubenswrapper[4687]: I0312 16:05:16.134142 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kbrlj"] Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.749746 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb" path="/var/lib/kubelet/pods/82a22cf3-9cd4-41cc-9bf2-1d8e9f8cedfb/volumes" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.750836 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c4bd45-19f3-4023-a174-25b3dcfe0ef3" path="/var/lib/kubelet/pods/88c4bd45-19f3-4023-a174-25b3dcfe0ef3/volumes" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.751381 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f019fa8b-a2da-42f1-91ba-0372fa992a65" path="/var/lib/kubelet/pods/f019fa8b-a2da-42f1-91ba-0372fa992a65/volumes" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.835521 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cc74486c8-bztwn"] Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.836652 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.839291 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.839626 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.839959 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.840669 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.840742 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.846037 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.851818 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cc74486c8-bztwn"] Mar 12 16:05:17 crc kubenswrapper[4687]: I0312 16:05:17.853563 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.030025 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-config\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.030324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-client-ca\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.030472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-proxy-ca-bundles\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.030599 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6knd\" (UniqueName: \"kubernetes.io/projected/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-kube-api-access-c6knd\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.030707 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-serving-cert\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.131971 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-serving-cert\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.132033 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-config\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.132107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-client-ca\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.132135 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-proxy-ca-bundles\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.132163 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6knd\" (UniqueName: \"kubernetes.io/projected/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-kube-api-access-c6knd\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.133770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-config\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.134099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-proxy-ca-bundles\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.134155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-client-ca\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 
crc kubenswrapper[4687]: I0312 16:05:18.143256 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-serving-cert\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.147430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6knd\" (UniqueName: \"kubernetes.io/projected/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-kube-api-access-c6knd\") pod \"controller-manager-cc74486c8-bztwn\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:18 crc kubenswrapper[4687]: I0312 16:05:18.169571 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:19 crc kubenswrapper[4687]: I0312 16:05:19.171578 4687 scope.go:117] "RemoveContainer" containerID="5cf5f4952d2379007fb93b07f572276793c6140f3e4fd6dc93aa413d2f466334" Mar 12 16:05:19 crc kubenswrapper[4687]: I0312 16:05:19.283329 4687 scope.go:117] "RemoveContainer" containerID="523a4a378a98c061599dbfc6fa7a7058a6300c297b9829827999b831ccbffb37" Mar 12 16:05:19 crc kubenswrapper[4687]: I0312 16:05:19.405480 4687 scope.go:117] "RemoveContainer" containerID="0f541b7ac6f472d5635a6e220b4a4466e8b566d97f51f7bde3aca9c4ddcced1a" Mar 12 16:05:19 crc kubenswrapper[4687]: I0312 16:05:19.460099 4687 scope.go:117] "RemoveContainer" containerID="2a94892626f059337b70ee4b9cecc73bec2e99e983834cd74df1b31b59951b39" Mar 12 16:05:19 crc kubenswrapper[4687]: I0312 16:05:19.480860 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl"] Mar 12 16:05:19 crc kubenswrapper[4687]: I0312 16:05:19.729514 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cc74486c8-bztwn"] Mar 12 16:05:19 crc kubenswrapper[4687]: W0312 16:05:19.736708 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38d8a971_f2e0_41b5_8a80_d8bf2bfd7f18.slice/crio-6dd09beb932477983bbaeacf48bf8e8bcf6455086ae651eaee09729f752e3af0 WatchSource:0}: Error finding container 6dd09beb932477983bbaeacf48bf8e8bcf6455086ae651eaee09729f752e3af0: Status 404 returned error can't find the container with id 6dd09beb932477983bbaeacf48bf8e8bcf6455086ae651eaee09729f752e3af0 Mar 12 16:05:19 crc kubenswrapper[4687]: W0312 16:05:19.880203 4687 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9d2fa4_739e_470f_95c2_0e547a7e147e.slice/crio-d0784a137f4466891cb57079ab18d4877139f21ab8db7784f25010d34832391d": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9d2fa4_739e_470f_95c2_0e547a7e147e.slice/crio-d0784a137f4466891cb57079ab18d4877139f21ab8db7784f25010d34832391d/memory.stat: read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9d2fa4_739e_470f_95c2_0e547a7e147e.slice/crio-d0784a137f4466891cb57079ab18d4877139f21ab8db7784f25010d34832391d/memory.stat: no such device], continuing to push stats Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.085384 4687 
generic.go:334] "Generic (PLEG): container finished" podID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerID="cc240b33be28efd31510f8d62f4d12d6090cf957f2000286d661551605934e8a" exitCode=0 Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.085448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6n92" event={"ID":"6af9f3dd-14a8-4a00-b056-4d655e17b3f4","Type":"ContainerDied","Data":"cc240b33be28efd31510f8d62f4d12d6090cf957f2000286d661551605934e8a"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.090768 4687 generic.go:334] "Generic (PLEG): container finished" podID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerID="de37ba5145ab568517dd5647d7a8f60e5cafd7bce7f2d93edb3fa5d4625c9661" exitCode=0 Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.090834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstv" event={"ID":"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06","Type":"ContainerDied","Data":"de37ba5145ab568517dd5647d7a8f60e5cafd7bce7f2d93edb3fa5d4625c9661"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.092702 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" event={"ID":"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18","Type":"ContainerStarted","Data":"31cf6ba18bef2e494d2db8c9a355c0af266cf71802456ee4df5ad98f74f3c23f"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.092728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" event={"ID":"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18","Type":"ContainerStarted","Data":"6dd09beb932477983bbaeacf48bf8e8bcf6455086ae651eaee09729f752e3af0"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.093050 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.095716 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wprbg" event={"ID":"f4e09ff9-9c13-4757-b36b-5a495caf6f07","Type":"ContainerStarted","Data":"190f56e019824d8294e9b15c8942da58639c0dac502a4e5a4a18710c9d057785"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.098125 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.098347 4687 generic.go:334] "Generic (PLEG): container finished" podID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerID="22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1" exitCode=0 Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.098394 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzpsd" event={"ID":"1e9d2fa4-739e-470f-95c2-0e547a7e147e","Type":"ContainerDied","Data":"22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.101441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vb4" event={"ID":"b0ddda34-ec8b-46b6-9b04-dcd34d30177c","Type":"ContainerStarted","Data":"3f64d7c163d43b6c3655c1c3f71905d9ac8438aee67f768cde297fd713b8bdf2"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.104404 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerID="4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507" exitCode=0 Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.104480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glm9k" event={"ID":"be373874-6fcd-41ac-ba5a-3090a71a145f","Type":"ContainerDied","Data":"4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.108766 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8z8d" event={"ID":"09abed8f-f9fb-41fd-b864-daf0e5026ae5","Type":"ContainerStarted","Data":"3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.111112 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" event={"ID":"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b","Type":"ContainerStarted","Data":"1891d539638dc088f226fcdf6ddd503c78460ce1be8a20c9356870cf0d822db8"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.111207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" event={"ID":"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b","Type":"ContainerStarted","Data":"75607e2198defeed3caf3a82f79d871bd0d862ca24416648887f51cbb29e0f01"} Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.112378 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.131234 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.213601 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q5vb4" podStartSLOduration=2.881003188 podStartE2EDuration="1m3.213585999s" podCreationTimestamp="2026-03-12 16:04:17 +0000 UTC" firstStartedPulling="2026-03-12 16:04:18.953621262 +0000 UTC m=+107.917583606" lastFinishedPulling="2026-03-12 16:05:19.286204073 +0000 UTC m=+168.250166417" observedRunningTime="2026-03-12 16:05:20.211697168 +0000 UTC m=+169.175659512" watchObservedRunningTime="2026-03-12 16:05:20.213585999 +0000 UTC m=+169.177548343" Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.275824 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t8z8d" podStartSLOduration=4.317703661 podStartE2EDuration="1m0.275810581s" podCreationTimestamp="2026-03-12 16:04:20 +0000 UTC" firstStartedPulling="2026-03-12 16:04:23.307867986 +0000 UTC m=+112.271830330" lastFinishedPulling="2026-03-12 16:05:19.265974916 +0000 UTC m=+168.229937250" observedRunningTime="2026-03-12 16:05:20.273111128 +0000 UTC m=+169.237073462" watchObservedRunningTime="2026-03-12 16:05:20.275810581 +0000 UTC m=+169.239772925" Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.332700 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" podStartSLOduration=7.332684067 podStartE2EDuration="7.332684067s" podCreationTimestamp="2026-03-12 16:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:05:20.331964128 +0000 UTC m=+169.295926472" watchObservedRunningTime="2026-03-12 16:05:20.332684067 +0000 UTC m=+169.296646411" Mar 12 16:05:20 crc kubenswrapper[4687]: I0312 16:05:20.361064 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" podStartSLOduration=7.361047124 podStartE2EDuration="7.361047124s" podCreationTimestamp="2026-03-12 16:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:05:20.36093269 +0000 UTC m=+169.324895034" watchObservedRunningTime="2026-03-12 16:05:20.361047124 +0000 UTC m=+169.325009468" Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.031993 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.032325 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.120146 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6n92" event={"ID":"6af9f3dd-14a8-4a00-b056-4d655e17b3f4","Type":"ContainerStarted","Data":"8c18c4d8a28bd8d7eff11888601183a8a1e0a49b7c6d863838115b5c0a51d983"} Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.122862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzpsd" event={"ID":"1e9d2fa4-739e-470f-95c2-0e547a7e147e","Type":"ContainerStarted","Data":"7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106"} Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.126126 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glm9k" event={"ID":"be373874-6fcd-41ac-ba5a-3090a71a145f","Type":"ContainerStarted","Data":"53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c"} Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.128479 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstv" event={"ID":"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06","Type":"ContainerStarted","Data":"d54bfc3986b0a496eaab7aa8aece543c66ae5b1b3648f7204a4fe9cc76eae88b"} Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.130438 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerID="190f56e019824d8294e9b15c8942da58639c0dac502a4e5a4a18710c9d057785" exitCode=0 Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.130539 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wprbg" event={"ID":"f4e09ff9-9c13-4757-b36b-5a495caf6f07","Type":"ContainerDied","Data":"190f56e019824d8294e9b15c8942da58639c0dac502a4e5a4a18710c9d057785"} Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.145955 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k6n92" podStartSLOduration=3.521444072 podStartE2EDuration="1m4.14593738s" podCreationTimestamp="2026-03-12 16:04:17 +0000 UTC" firstStartedPulling="2026-03-12 16:04:20.066433908 +0000 UTC m=+109.030396252" lastFinishedPulling="2026-03-12 16:05:20.690927206 +0000 UTC m=+169.654889560" 
observedRunningTime="2026-03-12 16:05:21.141606273 +0000 UTC m=+170.105568617" watchObservedRunningTime="2026-03-12 16:05:21.14593738 +0000 UTC m=+170.109899724" Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.167328 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdstv" podStartSLOduration=2.482506571 podStartE2EDuration="1m4.167309337s" podCreationTimestamp="2026-03-12 16:04:17 +0000 UTC" firstStartedPulling="2026-03-12 16:04:19.038771862 +0000 UTC m=+108.002734206" lastFinishedPulling="2026-03-12 16:05:20.723574628 +0000 UTC m=+169.687536972" observedRunningTime="2026-03-12 16:05:21.167192704 +0000 UTC m=+170.131155058" watchObservedRunningTime="2026-03-12 16:05:21.167309337 +0000 UTC m=+170.131271681" Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.191763 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-glm9k" podStartSLOduration=2.513368063 podStartE2EDuration="1m2.191748318s" podCreationTimestamp="2026-03-12 16:04:19 +0000 UTC" firstStartedPulling="2026-03-12 16:04:21.149381328 +0000 UTC m=+110.113343672" lastFinishedPulling="2026-03-12 16:05:20.827761583 +0000 UTC m=+169.791723927" observedRunningTime="2026-03-12 16:05:21.191064539 +0000 UTC m=+170.155026903" watchObservedRunningTime="2026-03-12 16:05:21.191748318 +0000 UTC m=+170.155710662" Mar 12 16:05:21 crc kubenswrapper[4687]: I0312 16:05:21.211393 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vzpsd" podStartSLOduration=2.739377878 podStartE2EDuration="1m2.211351347s" podCreationTimestamp="2026-03-12 16:04:19 +0000 UTC" firstStartedPulling="2026-03-12 16:04:21.176042288 +0000 UTC m=+110.140004632" lastFinishedPulling="2026-03-12 16:05:20.648015757 +0000 UTC m=+169.611978101" observedRunningTime="2026-03-12 16:05:21.208634214 +0000 UTC m=+170.172596548" watchObservedRunningTime="2026-03-12 16:05:21.211351347 +0000 UTC m=+170.175313691" Mar 12 16:05:22 crc kubenswrapper[4687]: I0312 16:05:22.067997 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8z8d" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="registry-server" probeResult="failure" output=< Mar 12 16:05:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:05:22 crc kubenswrapper[4687]: > Mar 12 16:05:22 crc kubenswrapper[4687]: I0312 16:05:22.137910 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wprbg" event={"ID":"f4e09ff9-9c13-4757-b36b-5a495caf6f07","Type":"ContainerStarted","Data":"219b3155f8c3ed93c4a3d94775118d5871d856da5a493b4ad7186368c574c9b7"} Mar 12 16:05:27 crc kubenswrapper[4687]: I0312 16:05:27.808685 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:05:27 crc kubenswrapper[4687]: I0312 16:05:27.810231 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:05:27 crc kubenswrapper[4687]: I0312 16:05:27.856510 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:05:27 crc kubenswrapper[4687]: I0312 16:05:27.876275 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-wprbg" podStartSLOduration=8.431729246 podStartE2EDuration="1m9.87625589s" podCreationTimestamp="2026-03-12 16:04:18 +0000 UTC" firstStartedPulling="2026-03-12 16:04:20.129984755 +0000 UTC m=+109.093947099" lastFinishedPulling="2026-03-12 16:05:21.574511399 +0000 UTC m=+170.538473743" observedRunningTime="2026-03-12 16:05:22.158104337 +0000 UTC m=+171.122066691" watchObservedRunningTime="2026-03-12 16:05:27.87625589 +0000 UTC m=+176.840218234" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.018147 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.018202 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.064439 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.232153 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.236339 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.260559 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.260632 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.324570 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.419914 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.420426 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:05:28 crc kubenswrapper[4687]: I0312 16:05:28.476185 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:05:29 crc kubenswrapper[4687]: I0312 16:05:29.238281 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:05:29 crc kubenswrapper[4687]: I0312 16:05:29.247322 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:05:29 crc kubenswrapper[4687]: I0312 16:05:29.833405 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:05:29 crc kubenswrapper[4687]: I0312 16:05:29.833485 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:05:29 crc kubenswrapper[4687]: I0312 16:05:29.907591 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:05:30 
crc kubenswrapper[4687]: I0312 16:05:30.013911 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6n92"] Mar 12 16:05:30 crc kubenswrapper[4687]: I0312 16:05:30.248393 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:05:30 crc kubenswrapper[4687]: I0312 16:05:30.248459 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:05:30 crc kubenswrapper[4687]: I0312 16:05:30.255971 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:05:30 crc kubenswrapper[4687]: I0312 16:05:30.340497 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:05:30 crc kubenswrapper[4687]: I0312 16:05:30.618583 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wprbg"] Mar 12 16:05:31 crc kubenswrapper[4687]: I0312 16:05:31.073508 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:05:31 crc kubenswrapper[4687]: I0312 16:05:31.122497 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:05:31 crc kubenswrapper[4687]: I0312 16:05:31.197489 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k6n92" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="registry-server" containerID="cri-o://8c18c4d8a28bd8d7eff11888601183a8a1e0a49b7c6d863838115b5c0a51d983" gracePeriod=2 Mar 12 16:05:31 crc kubenswrapper[4687]: I0312 16:05:31.236211 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.211293 4687 generic.go:334] "Generic (PLEG): container finished" podID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerID="8c18c4d8a28bd8d7eff11888601183a8a1e0a49b7c6d863838115b5c0a51d983" exitCode=0 Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.211337 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6n92" event={"ID":"6af9f3dd-14a8-4a00-b056-4d655e17b3f4","Type":"ContainerDied","Data":"8c18c4d8a28bd8d7eff11888601183a8a1e0a49b7c6d863838115b5c0a51d983"} Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.211851 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wprbg" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="registry-server" containerID="cri-o://219b3155f8c3ed93c4a3d94775118d5871d856da5a493b4ad7186368c574c9b7" gracePeriod=2 Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.423280 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glm9k"] Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.613265 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.776274 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-catalog-content\") pod \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.776377 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-utilities\") pod \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.776466 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l245c\" (UniqueName: \"kubernetes.io/projected/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-kube-api-access-l245c\") pod \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\" (UID: \"6af9f3dd-14a8-4a00-b056-4d655e17b3f4\") " Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.777441 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-utilities" (OuterVolumeSpecName: "utilities") pod "6af9f3dd-14a8-4a00-b056-4d655e17b3f4" (UID: "6af9f3dd-14a8-4a00-b056-4d655e17b3f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.781447 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-kube-api-access-l245c" (OuterVolumeSpecName: "kube-api-access-l245c") pod "6af9f3dd-14a8-4a00-b056-4d655e17b3f4" (UID: "6af9f3dd-14a8-4a00-b056-4d655e17b3f4"). InnerVolumeSpecName "kube-api-access-l245c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.834318 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6af9f3dd-14a8-4a00-b056-4d655e17b3f4" (UID: "6af9f3dd-14a8-4a00-b056-4d655e17b3f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.878295 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.878333 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:32 crc kubenswrapper[4687]: I0312 16:05:32.878344 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l245c\" (UniqueName: \"kubernetes.io/projected/6af9f3dd-14a8-4a00-b056-4d655e17b3f4-kube-api-access-l245c\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.217761 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6n92" event={"ID":"6af9f3dd-14a8-4a00-b056-4d655e17b3f4","Type":"ContainerDied","Data":"ebaeef191f0877d1d414ab6639ffb6c11954e1e9dc923d3c2da456c5d2572f53"} Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.217771 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6n92" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.217812 4687 scope.go:117] "RemoveContainer" containerID="8c18c4d8a28bd8d7eff11888601183a8a1e0a49b7c6d863838115b5c0a51d983" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.220144 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerID="219b3155f8c3ed93c4a3d94775118d5871d856da5a493b4ad7186368c574c9b7" exitCode=0 Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.220228 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wprbg" event={"ID":"f4e09ff9-9c13-4757-b36b-5a495caf6f07","Type":"ContainerDied","Data":"219b3155f8c3ed93c4a3d94775118d5871d856da5a493b4ad7186368c574c9b7"} Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.220403 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-glm9k" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="registry-server" containerID="cri-o://53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c" gracePeriod=2 Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.231997 4687 scope.go:117] "RemoveContainer" containerID="cc240b33be28efd31510f8d62f4d12d6090cf957f2000286d661551605934e8a" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.246065 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6n92"] Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.247323 4687 scope.go:117] "RemoveContainer" containerID="ab39192f516ed0c902d0baa4380dfc3639764d1e7edc20970200bada7bfb2359" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.254739 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k6n92"] Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.494306 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.663130 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.677736 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q8mpl"] Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.688336 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-utilities\") pod \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.688465 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgg4t\" (UniqueName: \"kubernetes.io/projected/f4e09ff9-9c13-4757-b36b-5a495caf6f07-kube-api-access-wgg4t\") pod \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.688506 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx4rq\" (UniqueName: \"kubernetes.io/projected/be373874-6fcd-41ac-ba5a-3090a71a145f-kube-api-access-dx4rq\") pod \"be373874-6fcd-41ac-ba5a-3090a71a145f\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.688529 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-utilities\") pod \"be373874-6fcd-41ac-ba5a-3090a71a145f\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.688561 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-catalog-content\") pod \"be373874-6fcd-41ac-ba5a-3090a71a145f\" (UID: \"be373874-6fcd-41ac-ba5a-3090a71a145f\") " Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.688584 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-catalog-content\") pod \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\" (UID: \"f4e09ff9-9c13-4757-b36b-5a495caf6f07\") " Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.690531 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-utilities" (OuterVolumeSpecName: "utilities") pod "be373874-6fcd-41ac-ba5a-3090a71a145f" (UID: "be373874-6fcd-41ac-ba5a-3090a71a145f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.690731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-utilities" (OuterVolumeSpecName: "utilities") pod "f4e09ff9-9c13-4757-b36b-5a495caf6f07" (UID: "f4e09ff9-9c13-4757-b36b-5a495caf6f07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.734493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be373874-6fcd-41ac-ba5a-3090a71a145f" (UID: "be373874-6fcd-41ac-ba5a-3090a71a145f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.743496 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be373874-6fcd-41ac-ba5a-3090a71a145f-kube-api-access-dx4rq" (OuterVolumeSpecName: "kube-api-access-dx4rq") pod "be373874-6fcd-41ac-ba5a-3090a71a145f" (UID: "be373874-6fcd-41ac-ba5a-3090a71a145f"). InnerVolumeSpecName "kube-api-access-dx4rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.746237 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e09ff9-9c13-4757-b36b-5a495caf6f07-kube-api-access-wgg4t" (OuterVolumeSpecName: "kube-api-access-wgg4t") pod "f4e09ff9-9c13-4757-b36b-5a495caf6f07" (UID: "f4e09ff9-9c13-4757-b36b-5a495caf6f07"). InnerVolumeSpecName "kube-api-access-wgg4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.746663 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" path="/var/lib/kubelet/pods/6af9f3dd-14a8-4a00-b056-4d655e17b3f4/volumes" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.789918 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgg4t\" (UniqueName: \"kubernetes.io/projected/f4e09ff9-9c13-4757-b36b-5a495caf6f07-kube-api-access-wgg4t\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.789946 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx4rq\" (UniqueName: \"kubernetes.io/projected/be373874-6fcd-41ac-ba5a-3090a71a145f-kube-api-access-dx4rq\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.789958 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.789968 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be373874-6fcd-41ac-ba5a-3090a71a145f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.789979 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.801694 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e09ff9-9c13-4757-b36b-5a495caf6f07" (UID: "f4e09ff9-9c13-4757-b36b-5a495caf6f07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.890743 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e09ff9-9c13-4757-b36b-5a495caf6f07-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.940754 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc74486c8-bztwn"] Mar 12 16:05:33 crc kubenswrapper[4687]: I0312 16:05:33.941404 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" podUID="38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" containerName="controller-manager" containerID="cri-o://31cf6ba18bef2e494d2db8c9a355c0af266cf71802456ee4df5ad98f74f3c23f" gracePeriod=30 Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.026119 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl"] Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.026324 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" podUID="0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" containerName="route-controller-manager" containerID="cri-o://1891d539638dc088f226fcdf6ddd503c78460ce1be8a20c9356870cf0d822db8" gracePeriod=30 Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.230323 4687 generic.go:334] "Generic (PLEG): container finished" podID="38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" containerID="31cf6ba18bef2e494d2db8c9a355c0af266cf71802456ee4df5ad98f74f3c23f" exitCode=0 Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.230471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" event={"ID":"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18","Type":"ContainerDied","Data":"31cf6ba18bef2e494d2db8c9a355c0af266cf71802456ee4df5ad98f74f3c23f"} Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.232517 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wprbg" event={"ID":"f4e09ff9-9c13-4757-b36b-5a495caf6f07","Type":"ContainerDied","Data":"8d55310491677d0527051d90632c9bf29a1ec9d5e97fc3d709fdb84f2aea405d"} Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.232547 4687 scope.go:117] "RemoveContainer" containerID="219b3155f8c3ed93c4a3d94775118d5871d856da5a493b4ad7186368c574c9b7" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.232650 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wprbg" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.240993 4687 generic.go:334] "Generic (PLEG): container finished" podID="0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" containerID="1891d539638dc088f226fcdf6ddd503c78460ce1be8a20c9356870cf0d822db8" exitCode=0 Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.241058 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" event={"ID":"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b","Type":"ContainerDied","Data":"1891d539638dc088f226fcdf6ddd503c78460ce1be8a20c9356870cf0d822db8"} Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.244217 4687 generic.go:334] "Generic (PLEG): container finished" podID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerID="53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c" exitCode=0 Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.244250 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glm9k" event={"ID":"be373874-6fcd-41ac-ba5a-3090a71a145f","Type":"ContainerDied","Data":"53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c"} Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.244269 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glm9k" event={"ID":"be373874-6fcd-41ac-ba5a-3090a71a145f","Type":"ContainerDied","Data":"718050d5c190dcfbdd1090e806e702c8e0c5fe6b4744baeaa49c972f9bac8b67"} Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.244326 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glm9k" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.259795 4687 scope.go:117] "RemoveContainer" containerID="190f56e019824d8294e9b15c8942da58639c0dac502a4e5a4a18710c9d057785" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.265659 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glm9k"] Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.269663 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-glm9k"] Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.287201 4687 scope.go:117] "RemoveContainer" containerID="fe43ac87f41c5417fa227a97e8af181b72572f6c3c5ff9943be20152d0fefbac" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.291014 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wprbg"] Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.297165 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wprbg"] Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.325646 4687 scope.go:117] "RemoveContainer" containerID="53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.347702 4687 scope.go:117] "RemoveContainer" containerID="4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.361906 4687 scope.go:117] "RemoveContainer" containerID="3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.371830 4687 scope.go:117] "RemoveContainer" containerID="53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c" Mar 12 
16:05:34 crc kubenswrapper[4687]: E0312 16:05:34.372126 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c\": container with ID starting with 53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c not found: ID does not exist" containerID="53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.372156 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c"} err="failed to get container status \"53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c\": rpc error: code = NotFound desc = could not find container \"53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c\": container with ID starting with 53c659d2cd1b70265d8935dd4db6034276fbb02cc99ed1d71976a3eb1006371c not found: ID does not exist" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.372177 4687 scope.go:117] "RemoveContainer" containerID="4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507" Mar 12 16:05:34 crc kubenswrapper[4687]: E0312 16:05:34.372381 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507\": container with ID starting with 4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507 not found: ID does not exist" containerID="4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.372403 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507"} err="failed to get container status \"4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507\": rpc error: code = NotFound desc = could not find container \"4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507\": container with ID starting with 4f3fa6a4cbd1cb27a822db000b75f2d5a11b60cd444d3980dc20e23136b3a507 not found: ID does not exist" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.372420 4687 scope.go:117] "RemoveContainer" containerID="3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606" Mar 12 16:05:34 crc kubenswrapper[4687]: E0312 16:05:34.372607 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606\": container with ID starting with 3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606 not found: ID does not exist" containerID="3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.372630 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606"} err="failed to get container status \"3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606\": rpc error: code = NotFound desc = could not find container \"3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606\": container with ID starting with 3ded160db2c12e740cf6aef52ceb47195120384247362aca067fd48239f04606 not found: ID does not exist" Mar 12 16:05:34 crc 
kubenswrapper[4687]: I0312 16:05:34.405255 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.431572 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599053 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-serving-cert\") pod \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599126 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6knd\" (UniqueName: \"kubernetes.io/projected/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-kube-api-access-c6knd\") pod \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599168 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-proxy-ca-bundles\") pod \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599191 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-config\") pod \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599233 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-config\") pod \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-client-ca\") pod \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\" (UID: \"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599273 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lghr9\" (UniqueName: \"kubernetes.io/projected/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-kube-api-access-lghr9\") pod \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599289 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-client-ca\") pod \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\" (UID: \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.599311 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-serving-cert\") pod \"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\" (UID: 
\"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b\") " Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600153 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-client-ca" (OuterVolumeSpecName: "client-ca") pod "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" (UID: "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600172 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" (UID: "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600177 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-config" (OuterVolumeSpecName: "config") pod "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" (UID: "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600252 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-config" (OuterVolumeSpecName: "config") pod "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" (UID: "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600480 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" (UID: "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600795 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600820 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600831 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600839 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.600847 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.604191 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" (UID: "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.604317 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" (UID: "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.605495 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-kube-api-access-c6knd" (OuterVolumeSpecName: "kube-api-access-c6knd") pod "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" (UID: "38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18"). InnerVolumeSpecName "kube-api-access-c6knd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.605532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-kube-api-access-lghr9" (OuterVolumeSpecName: "kube-api-access-lghr9") pod "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" (UID: "0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b"). InnerVolumeSpecName "kube-api-access-lghr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.701633 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6knd\" (UniqueName: \"kubernetes.io/projected/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-kube-api-access-c6knd\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.701671 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lghr9\" (UniqueName: \"kubernetes.io/projected/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-kube-api-access-lghr9\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.701682 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:34 crc kubenswrapper[4687]: I0312 16:05:34.701691 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.257376 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" event={"ID":"0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b","Type":"ContainerDied","Data":"75607e2198defeed3caf3a82f79d871bd0d862ca24416648887f51cbb29e0f01"} Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.257437 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.257448 4687 scope.go:117] "RemoveContainer" containerID="1891d539638dc088f226fcdf6ddd503c78460ce1be8a20c9356870cf0d822db8" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.264168 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" event={"ID":"38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18","Type":"ContainerDied","Data":"6dd09beb932477983bbaeacf48bf8e8bcf6455086ae651eaee09729f752e3af0"} Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.264173 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc74486c8-bztwn" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.280428 4687 scope.go:117] "RemoveContainer" containerID="31cf6ba18bef2e494d2db8c9a355c0af266cf71802456ee4df5ad98f74f3c23f" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.289764 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl"] Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.293756 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dccfdd746-ft2hl"] Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.308033 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc74486c8-bztwn"] Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.310803 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cc74486c8-bztwn"] Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.741407 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" path="/var/lib/kubelet/pods/0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b/volumes" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.741904 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" path="/var/lib/kubelet/pods/38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18/volumes" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.742313 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" path="/var/lib/kubelet/pods/be373874-6fcd-41ac-ba5a-3090a71a145f/volumes" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.743234 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" path="/var/lib/kubelet/pods/f4e09ff9-9c13-4757-b36b-5a495caf6f07/volumes" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.842763 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7989f78bdf-zgktt"] Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843350 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843402 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843425 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="extract-content" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843439 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="extract-content" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843459 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="extract-utilities" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843472 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="extract-utilities" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843490 4687 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="extract-utilities" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843502 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="extract-utilities" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843520 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" containerName="controller-manager" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843532 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" containerName="controller-manager" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843553 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" containerName="route-controller-manager" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843565 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" containerName="route-controller-manager" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843594 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="extract-utilities" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843606 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="extract-utilities" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843624 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843635 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843654 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843665 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843681 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="extract-content" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843695 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="extract-content" Mar 12 16:05:35 crc kubenswrapper[4687]: E0312 16:05:35.843715 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="extract-content" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843727 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="extract-content" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843897 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af9f3dd-14a8-4a00-b056-4d655e17b3f4" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843919 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="be373874-6fcd-41ac-ba5a-3090a71a145f" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 
16:05:35.843939 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e09ff9-9c13-4757-b36b-5a495caf6f07" containerName="registry-server" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843962 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d8a971-f2e0-41b5-8a80-d8bf2bfd7f18" containerName="controller-manager" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.843982 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffa8b25-8b1d-42e0-99ab-ad6a8b8d550b" containerName="route-controller-manager" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.846135 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f"] Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.846324 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.848904 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.849049 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.849061 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.849197 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.849279 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.849598 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.853620 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.855518 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f"] Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.860035 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.860067 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.860210 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.860446 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.860831 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.861727 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.864800 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 16:05:35 crc kubenswrapper[4687]: I0312 16:05:35.870705 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7989f78bdf-zgktt"] Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.017318 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-config\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.017383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82495e4-c994-496e-9813-d6c64e65caba-serving-cert\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.017419 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-client-ca\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.017863 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8sx\" (UniqueName: \"kubernetes.io/projected/b9f7020e-502f-4991-b432-21db9ae6af07-kube-api-access-ds8sx\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.017929 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-proxy-ca-bundles\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.018025 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-client-ca\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.018331 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-config\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.018562 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvlnx\" (UniqueName: \"kubernetes.io/projected/b82495e4-c994-496e-9813-d6c64e65caba-kube-api-access-fvlnx\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.018661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f7020e-502f-4991-b432-21db9ae6af07-serving-cert\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.119698 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-config\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.119744 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82495e4-c994-496e-9813-d6c64e65caba-serving-cert\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.119770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-client-ca\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " 
pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.119794 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds8sx\" (UniqueName: \"kubernetes.io/projected/b9f7020e-502f-4991-b432-21db9ae6af07-kube-api-access-ds8sx\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.119818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-proxy-ca-bundles\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.120199 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-client-ca\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.120232 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-config\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.121161 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvlnx\" (UniqueName: \"kubernetes.io/projected/b82495e4-c994-496e-9813-d6c64e65caba-kube-api-access-fvlnx\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.121184 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f7020e-502f-4991-b432-21db9ae6af07-serving-cert\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.121077 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-client-ca\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.120903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-client-ca\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 
16:05:36.121424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-config\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.121542 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-proxy-ca-bundles\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.122225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-config\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.124963 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f7020e-502f-4991-b432-21db9ae6af07-serving-cert\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.126824 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82495e4-c994-496e-9813-d6c64e65caba-serving-cert\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.145698 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds8sx\" (UniqueName: \"kubernetes.io/projected/b9f7020e-502f-4991-b432-21db9ae6af07-kube-api-access-ds8sx\") pod \"route-controller-manager-6dccfd4cdc-wsv7f\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.151614 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvlnx\" (UniqueName: \"kubernetes.io/projected/b82495e4-c994-496e-9813-d6c64e65caba-kube-api-access-fvlnx\") pod \"controller-manager-7989f78bdf-zgktt\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.173910 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.188464 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.396264 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7989f78bdf-zgktt"] Mar 12 16:05:36 crc kubenswrapper[4687]: W0312 16:05:36.400630 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb82495e4_c994_496e_9813_d6c64e65caba.slice/crio-94cdc458dfd0a883c4da27299bbd2157b8abea11bb763a4bc561e224b891b673 WatchSource:0}: Error finding container 94cdc458dfd0a883c4da27299bbd2157b8abea11bb763a4bc561e224b891b673: Status 404 returned error can't find the container with id 94cdc458dfd0a883c4da27299bbd2157b8abea11bb763a4bc561e224b891b673 Mar 12 16:05:36 crc kubenswrapper[4687]: I0312 16:05:36.447449 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f"] Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.282049 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" event={"ID":"b9f7020e-502f-4991-b432-21db9ae6af07","Type":"ContainerStarted","Data":"8ddfc499c53ac77be62d2bc7e496ba0ba148a792452edf68059a7bd0876c84c4"} Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.282121 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" event={"ID":"b9f7020e-502f-4991-b432-21db9ae6af07","Type":"ContainerStarted","Data":"7d883d1b9cd54957721b85e8224b5a36818f7a68b093484396b1afe317c68530"} Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.282505 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.283447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" event={"ID":"b82495e4-c994-496e-9813-d6c64e65caba","Type":"ContainerStarted","Data":"7724527d846751a7246e20f20448b9c681d2019619a95356ad6fd3bf0a5f18ec"} Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.283472 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" event={"ID":"b82495e4-c994-496e-9813-d6c64e65caba","Type":"ContainerStarted","Data":"94cdc458dfd0a883c4da27299bbd2157b8abea11bb763a4bc561e224b891b673"} Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.284143 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.292729 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.292794 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.306303 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" podStartSLOduration=3.306284783 
podStartE2EDuration="3.306284783s" podCreationTimestamp="2026-03-12 16:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:05:37.303042646 +0000 UTC m=+186.267004990" watchObservedRunningTime="2026-03-12 16:05:37.306284783 +0000 UTC m=+186.270247127" Mar 12 16:05:37 crc kubenswrapper[4687]: I0312 16:05:37.374191 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" podStartSLOduration=4.374173398 podStartE2EDuration="4.374173398s" podCreationTimestamp="2026-03-12 16:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:05:37.373006476 +0000 UTC m=+186.336968820" watchObservedRunningTime="2026-03-12 16:05:37.374173398 +0000 UTC m=+186.338135742" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.170856 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.171739 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.171858 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.172040 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128" gracePeriod=15 Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.172147 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d" gracePeriod=15 Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.172165 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f" gracePeriod=15 Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.172195 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8" gracePeriod=15 Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.172291 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708" gracePeriod=15 Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174220 4687 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174494 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174515 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174532 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174543 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174562 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174574 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174598 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174610 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174625 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174636 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174654 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174665 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174679 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174690 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.174708 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174719 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174877 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174890 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174907 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174920 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174934 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174945 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174966 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.174980 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.175166 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.175184 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.175202 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.175213 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.175400 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.202705 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307101 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307161 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307246 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307308 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307338 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307380 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307429 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.307443 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.351671 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.353012 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.354133 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f" exitCode=0 Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.354150 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8" exitCode=2 Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408053 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408135 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408153 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408148 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408175 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408204 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408216 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408234 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408245 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408263 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408280 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408292 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.408352 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: I0312 16:05:48.504310 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:05:48 crc kubenswrapper[4687]: E0312 16:05:48.527213 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c23a5dae91855 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:05:48.526418005 +0000 UTC m=+197.490380349,LastTimestamp:2026-03-12 16:05:48.526418005 +0000 UTC m=+197.490380349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.183014 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.183558 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.361726 4687 generic.go:334] "Generic (PLEG): container finished" podID="296d3d6d-ef43-4965-a804-d1bb0c126d64" containerID="e93741b23d08ac5a9ffe5d11b0a09c11346683d4ddf9df96657e37a95b9783b3" exitCode=0 Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.361815 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"296d3d6d-ef43-4965-a804-d1bb0c126d64","Type":"ContainerDied","Data":"e93741b23d08ac5a9ffe5d11b0a09c11346683d4ddf9df96657e37a95b9783b3"} Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.362628 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.363060 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.363793 4687 status_manager.go:851] "Failed to get status for pod" 
podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.366039 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.367815 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.368828 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d" exitCode=0 Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.368852 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708" exitCode=0 Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.368897 4687 scope.go:117] "RemoveContainer" containerID="d6b3f7862e29ba61d29d1f6faf5c7e9edd154254af17295668fa1f8170f5d183" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.372695 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5f0f99fff29f6fb9e8567dde5e7f46a9649e658b8ca0c7d1b015f9732832b675"} Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.372747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5099011cae5a65dd684ecb71b4f71f21b06ca299aaa89c4c6075e87796e9a3d4"} Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.374106 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.374520 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:49 crc kubenswrapper[4687]: I0312 16:05:49.374894 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.380954 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 16:05:50 crc kubenswrapper[4687]: 
I0312 16:05:50.544726 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.545721 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.546314 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.546633 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.546874 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.637521 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.637585 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.637620 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.637797 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.637826 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.637842 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.737640 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.738327 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.738766 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.739092 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.739580 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.739630 4687 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.739650 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.840608 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-kubelet-dir\") pod \"296d3d6d-ef43-4965-a804-d1bb0c126d64\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.840707 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-var-lock\") pod \"296d3d6d-ef43-4965-a804-d1bb0c126d64\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.840781 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/296d3d6d-ef43-4965-a804-d1bb0c126d64-kube-api-access\") pod \"296d3d6d-ef43-4965-a804-d1bb0c126d64\" (UID: \"296d3d6d-ef43-4965-a804-d1bb0c126d64\") " Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.840861 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "296d3d6d-ef43-4965-a804-d1bb0c126d64" (UID: "296d3d6d-ef43-4965-a804-d1bb0c126d64"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.840939 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-var-lock" (OuterVolumeSpecName: "var-lock") pod "296d3d6d-ef43-4965-a804-d1bb0c126d64" (UID: "296d3d6d-ef43-4965-a804-d1bb0c126d64"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.841145 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.841180 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/296d3d6d-ef43-4965-a804-d1bb0c126d64-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.845921 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296d3d6d-ef43-4965-a804-d1bb0c126d64-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "296d3d6d-ef43-4965-a804-d1bb0c126d64" (UID: "296d3d6d-ef43-4965-a804-d1bb0c126d64"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:50 crc kubenswrapper[4687]: I0312 16:05:50.942541 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/296d3d6d-ef43-4965-a804-d1bb0c126d64-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.390833 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.391650 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128" exitCode=0 Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.391788 4687 scope.go:117] "RemoveContainer" containerID="02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.391806 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.393518 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"296d3d6d-ef43-4965-a804-d1bb0c126d64","Type":"ContainerDied","Data":"1c1bf4af36c153483faa5de3ebbf60da398e29b1b172434ead150376c0df6205"} Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.393555 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c1bf4af36c153483faa5de3ebbf60da398e29b1b172434ead150376c0df6205" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.393577 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.412030 4687 scope.go:117] "RemoveContainer" containerID="bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.413292 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.413587 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.413832 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.417608 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.417825 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.418108 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.426563 4687 scope.go:117] "RemoveContainer" containerID="9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708" Mar 12 16:05:51 crc 
kubenswrapper[4687]: I0312 16:05:51.439526 4687 scope.go:117] "RemoveContainer" containerID="13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.451866 4687 scope.go:117] "RemoveContainer" containerID="b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.471622 4687 scope.go:117] "RemoveContainer" containerID="8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.491043 4687 scope.go:117] "RemoveContainer" containerID="02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d" Mar 12 16:05:51 crc kubenswrapper[4687]: E0312 16:05:51.491702 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d\": container with ID starting with 02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d not found: ID does not exist" containerID="02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.491862 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d"} err="failed to get container status \"02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d\": rpc error: code = NotFound desc = could not find container \"02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d\": container with ID starting with 02b26314f7fb80ac6345d675812eb1fa68d43b4996d63f3d4fbe129abc307e1d not found: ID does not exist" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.491910 4687 scope.go:117] "RemoveContainer" containerID="bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f" Mar 12 16:05:51 crc kubenswrapper[4687]: E0312 16:05:51.492868 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\": container with ID starting with bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f not found: ID does not exist" containerID="bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.492916 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f"} err="failed to get container status \"bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\": rpc error: code = NotFound desc = could not find container \"bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f\": container with ID starting with bb55eccc7223b23a2c10e557c1177e701ded9427d784dc9f7009d5e4467e158f not found: ID does not exist" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.492950 4687 scope.go:117] "RemoveContainer" containerID="9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708" Mar 12 16:05:51 crc kubenswrapper[4687]: E0312 16:05:51.493273 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\": container with ID starting with 9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708 not found: ID does not 
exist" containerID="9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.493304 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708"} err="failed to get container status \"9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\": rpc error: code = NotFound desc = could not find container \"9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708\": container with ID starting with 9a34c78cd80968a4a6a919e5da94a7ff7c74bb0c46da37822d2889adbc3fb708 not found: ID does not exist" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.493325 4687 scope.go:117] "RemoveContainer" containerID="13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8" Mar 12 16:05:51 crc kubenswrapper[4687]: E0312 16:05:51.493646 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\": container with ID starting with 13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8 not found: ID does not exist" containerID="13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.493662 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8"} err="failed to get container status \"13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\": rpc error: code = NotFound desc = could not find container \"13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8\": container with ID starting with 13dcd8373b3a318a301ac496ab796867ede2a8da7155cd5bfb835e49d69889c8 not found: ID does not exist" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.493680 4687 scope.go:117] "RemoveContainer" containerID="b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128" Mar 12 16:05:51 crc kubenswrapper[4687]: E0312 16:05:51.493965 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\": container with ID starting with b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128 not found: ID does not exist" containerID="b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.493991 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128"} err="failed to get container status \"b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\": rpc error: code = NotFound desc = could not find container \"b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128\": container with ID starting with b35976d31a6fdfb31ed2346ff865da30da6a5810423e041c80a8695921a49128 not found: ID does not exist" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.494006 4687 scope.go:117] "RemoveContainer" containerID="8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6" Mar 12 16:05:51 crc kubenswrapper[4687]: E0312 16:05:51.495774 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\": container with ID starting with 8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6 not found: ID does not exist" containerID="8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.495802 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6"} err="failed to get container status \"8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\": rpc error: code = NotFound desc = could not find container \"8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6\": container with ID starting with 8d5af768ce671aec1a4d7bb8bf91e340367f7aa7bfef427830a73299621f75a6 not found: ID does not exist" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.734302 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.734574 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.734851 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:51 crc kubenswrapper[4687]: I0312 16:05:51.742914 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 12 16:05:52 crc kubenswrapper[4687]: E0312 16:05:52.869167 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c23a5dae91855 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 16:05:48.526418005 +0000 UTC m=+197.490380349,LastTimestamp:2026-03-12 16:05:48.526418005 +0000 UTC m=+197.490380349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 
16:05:54.076483 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 16:05:54.077694 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 16:05:54.077954 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 16:05:54.078178 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 16:05:54.078415 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:54 crc kubenswrapper[4687]: I0312 16:05:54.078443 4687 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 16:05:54.078761 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 16:05:54.279498 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Mar 12 16:05:54 crc kubenswrapper[4687]: E0312 16:05:54.680455 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms" Mar 12 16:05:55 crc kubenswrapper[4687]: E0312 16:05:55.481196 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s" Mar 12 16:05:57 crc kubenswrapper[4687]: E0312 16:05:57.083800 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="3.2s" Mar 12 16:05:58 crc kubenswrapper[4687]: I0312 16:05:58.752235 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" containerName="oauth-openshift" 
containerID="cri-o://6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4" gracePeriod=15 Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.334243 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.334896 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.335273 4687 status_manager.go:851] "Failed to get status for pod" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q8mpl\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.335576 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.449745 4687 generic.go:334] "Generic (PLEG): container finished" podID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" containerID="6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4" exitCode=0 Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.449805 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.449794 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" event={"ID":"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb","Type":"ContainerDied","Data":"6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4"} Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.449942 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" event={"ID":"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb","Type":"ContainerDied","Data":"a32ba040757c2158a33477ea6e4d44f59c0229189575b97c1d63675b48dfc6bc"} Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.449962 4687 scope.go:117] "RemoveContainer" containerID="6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.450555 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.451052 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.451654 4687 status_manager.go:851] "Failed to get status for pod" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q8mpl\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456001 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-login\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456068 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-idp-0-file-data\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456095 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-serving-cert\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456118 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-provider-selection\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456151 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-cliconfig\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456199 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-session\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456221 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-router-certs\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456560 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfb2l\" (UniqueName: \"kubernetes.io/projected/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-kube-api-access-zfb2l\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456613 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-ocp-branding-template\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456631 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-policies\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.456896 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.457310 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-error\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.457344 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-trusted-ca-bundle\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.457604 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-service-ca\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.457631 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-dir\") pod \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\" (UID: \"df2ffa76-a50a-4e78-994b-1d7fd6a83ddb\") " Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.457899 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.457938 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.458044 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.458052 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.458253 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.461476 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.462867 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.463022 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-kube-api-access-zfb2l" (OuterVolumeSpecName: "kube-api-access-zfb2l") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "kube-api-access-zfb2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.463089 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.464000 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.472073 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.472226 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.472731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.474659 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" (UID: "df2ffa76-a50a-4e78-994b-1d7fd6a83ddb"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.481673 4687 scope.go:117] "RemoveContainer" containerID="6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4" Mar 12 16:05:59 crc kubenswrapper[4687]: E0312 16:05:59.482324 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4\": container with ID starting with 6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4 not found: ID does not exist" containerID="6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.482371 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4"} err="failed to get container status \"6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4\": rpc error: code = NotFound desc = could not find container \"6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4\": container with ID starting with 6e677d134a625d29a8c71fd6723d135307376f5f52b9579a510087d561381df4 not found: ID does not exist" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559144 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559199 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559227 4687 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559251 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559270 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559288 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfb2l\" (UniqueName: \"kubernetes.io/projected/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-kube-api-access-zfb2l\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559308 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559325 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559342 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559390 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559408 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559426 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.559443 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.770792 4687 status_manager.go:851] "Failed to get status for pod" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q8mpl\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 
crc kubenswrapper[4687]: I0312 16:05:59.771218 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:05:59 crc kubenswrapper[4687]: I0312 16:05:59.771629 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:00 crc kubenswrapper[4687]: E0312 16:06:00.285945 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="6.4s" Mar 12 16:06:00 crc kubenswrapper[4687]: I0312 16:06:00.732612 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:00 crc kubenswrapper[4687]: I0312 16:06:00.733252 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:00 crc kubenswrapper[4687]: I0312 16:06:00.733702 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:00 crc kubenswrapper[4687]: I0312 16:06:00.734048 4687 status_manager.go:851] "Failed to get status for pod" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q8mpl\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:00 crc kubenswrapper[4687]: I0312 16:06:00.747433 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:00 crc kubenswrapper[4687]: I0312 16:06:00.747466 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:00 crc kubenswrapper[4687]: E0312 16:06:00.748006 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:00 crc kubenswrapper[4687]: I0312 16:06:00.748592 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.464192 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.465146 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.465209 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7f437980117f5d03ea8163287190b3dbb86c3c9545a22d9e2e4bfe7abae9300d" exitCode=1 Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.465292 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7f437980117f5d03ea8163287190b3dbb86c3c9545a22d9e2e4bfe7abae9300d"} Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.465856 4687 scope.go:117] "RemoveContainer" containerID="7f437980117f5d03ea8163287190b3dbb86c3c9545a22d9e2e4bfe7abae9300d" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.466092 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.466470 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.466711 4687 status_manager.go:851] "Failed to get status for pod" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q8mpl\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.466896 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.467854 4687 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2e6fb20a6a5c310880047ae37d315f7caef1c6fa6cf97126fd818c370b59723d" exitCode=0 Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.467882 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2e6fb20a6a5c310880047ae37d315f7caef1c6fa6cf97126fd818c370b59723d"} Mar 12 
16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.467908 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df805a245270e0916dd7064301adc4528a4a13901dd1ca2a3e333ed5875bdf6c"} Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.468128 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.468139 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:01 crc kubenswrapper[4687]: E0312 16:06:01.468409 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.468465 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.468877 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.469396 4687 status_manager.go:851] "Failed to get status for pod" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q8mpl\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.469907 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.736496 4687 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.737635 4687 status_manager.go:851] "Failed to get status for pod" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" pod="openshift-authentication/oauth-openshift-558db77b4-q8mpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q8mpl\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 
crc kubenswrapper[4687]: I0312 16:06:01.738129 4687 status_manager.go:851] "Failed to get status for pod" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.738462 4687 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: I0312 16:06:01.738896 4687 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Mar 12 16:06:01 crc kubenswrapper[4687]: E0312 16:06:01.795294 4687 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" volumeName="registry-storage" Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.112240 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.477680 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"701e0ac528929452ce3c511081b8ed807557d32056f7bf43257ecf5109d565a1"} Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.477757 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b4a5f86f7c0aa9dc19e4f371e8b3b946c9fc7f421a818ac2ee6ed55b81d2aaff"} Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.477793 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78217e8f56baee39f8492577d0b8ad90d37d7ba95380d1e7874867fa60348a95"} Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.477804 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5a1c1f3f1df56f3bf2a06e07b76a6848e9f34ce8e028f53cafbaeb7571096a1"} Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.490473 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.491104 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 16:06:02 crc kubenswrapper[4687]: I0312 16:06:02.491169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ffe1bad6177828312ae387f3863013072ae2ccf765bcff5c2a404c7098c82fb"} Mar 12 16:06:03 crc kubenswrapper[4687]: I0312 16:06:03.499665 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"52448e64a886e76ff6bc618acc364f78adb89cc029d52648b0fe209474c3f7e8"} Mar 12 16:06:03 crc kubenswrapper[4687]: I0312 16:06:03.500334 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:03 crc kubenswrapper[4687]: I0312 16:06:03.500353 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:05 crc kubenswrapper[4687]: I0312 16:06:05.749016 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:05 crc kubenswrapper[4687]: I0312 16:06:05.749461 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:05 crc kubenswrapper[4687]: I0312 16:06:05.755307 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:08 crc kubenswrapper[4687]: I0312 16:06:08.512761 4687 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:08 crc kubenswrapper[4687]: I0312 16:06:08.564617 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="48f87c20-dd37-4187-b6ba-e508cd0ec6d0" Mar 12 16:06:09 crc kubenswrapper[4687]: I0312 16:06:09.532617 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:09 crc kubenswrapper[4687]: I0312 16:06:09.533433 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:09 crc kubenswrapper[4687]: I0312 16:06:09.533585 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:09 crc kubenswrapper[4687]: I0312 16:06:09.538308 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:10 crc kubenswrapper[4687]: I0312 16:06:10.537524 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:10 crc kubenswrapper[4687]: I0312 16:06:10.537550 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:11 crc kubenswrapper[4687]: I0312 16:06:11.123338 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:06:11 crc kubenswrapper[4687]: I0312 16:06:11.128098 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:06:11 crc kubenswrapper[4687]: I0312 16:06:11.542316 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:06:11 crc kubenswrapper[4687]: I0312 16:06:11.546667 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 16:06:11 crc kubenswrapper[4687]: I0312 16:06:11.755769 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="48f87c20-dd37-4187-b6ba-e508cd0ec6d0" Mar 12 16:06:14 crc kubenswrapper[4687]: I0312 16:06:14.131241 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:06:14 crc kubenswrapper[4687]: I0312 16:06:14.131799 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:06:18 crc kubenswrapper[4687]: I0312 16:06:18.604980 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:06:19 crc kubenswrapper[4687]: I0312 16:06:19.132245 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 16:06:19 crc kubenswrapper[4687]: I0312 16:06:19.783050 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 16:06:20 crc kubenswrapper[4687]: I0312 16:06:20.118226 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 16:06:20 crc kubenswrapper[4687]: I0312 16:06:20.867981 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.069340 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.134296 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.150173 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.476921 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.571266 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.587270 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.590246 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.693023 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.835787 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 16:06:21 crc kubenswrapper[4687]: I0312 16:06:21.962808 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.003966 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.064617 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.106495 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.374045 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.462563 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.506851 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.512784 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.685975 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.797964 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.829425 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.911633 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.938430 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 16:06:22 crc kubenswrapper[4687]: I0312 16:06:22.995128 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 
16:06:23.098455 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.121522 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.183245 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.186994 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.237577 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.326814 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.466024 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.490399 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.550559 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.755135 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.756824 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.788233 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.824114 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.858596 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.878707 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 16:06:23 crc kubenswrapper[4687]: I0312 16:06:23.907669 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.027784 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.048077 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.095003 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.135572 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.145053 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.155152 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.164546 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.276620 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.300925 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.331057 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.342880 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.414427 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.482098 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.541766 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.549637 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.675432 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.678644 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.767323 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.780297 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.860394 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.906826 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 16:06:24 crc kubenswrapper[4687]: I0312 16:06:24.909455 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.037073 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.162598 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.176521 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.244158 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.313558 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.358190 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.388818 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.423271 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.447015 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.518119 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.542612 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.625434 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.653384 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.682184 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.690091 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.693435 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.696982 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.698281 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.744569 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.781001 
4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.784750 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.791498 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.799799 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.908856 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 16:06:25 crc kubenswrapper[4687]: I0312 16:06:25.917555 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.182313 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.189003 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.385726 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.431908 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.463272 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.487614 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.493483 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.516435 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 16:06:26 crc kubenswrapper[4687]: I0312 16:06:26.601063 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.025651 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.027919 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.091598 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.148959 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.152341 4687 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.200387 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.221076 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.250726 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.274334 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.287082 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.416399 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.504435 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.614635 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.652397 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.752385 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.886820 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.904815 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.936232 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 16:06:27 crc kubenswrapper[4687]: I0312 16:06:27.994692 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.051991 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.091612 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.209624 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.230887 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.246501 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.259881 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.303682 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.460773 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.548132 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.552253 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.610607 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.626283 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.665154 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.666209 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.667336 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.728127 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.734069 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.735486 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.776145 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.815149 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 12 16:06:28 crc kubenswrapper[4687]: I0312 16:06:28.830410 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.000304 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.035117 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.153738 4687 reflector.go:368] Caches populated 
for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.153792 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.205234 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.210737 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.231537 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.270106 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.308843 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.310768 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.395973 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.398945 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.419924 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.442215 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.516237 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.713422 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.744396 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.769752 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.869292 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.874216 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.939687 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:06:29 crc kubenswrapper[4687]: I0312 16:06:29.948707 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.185181 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.202166 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.295890 4687 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.300129 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.408221 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.445175 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.469767 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.481346 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.564939 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.663432 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.684405 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.694598 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.707803 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.746533 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.773439 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.789500 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.808569 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.882631 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.946347 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 16:06:30 crc kubenswrapper[4687]: I0312 16:06:30.949386 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.096840 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.170301 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.288462 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.297012 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.441268 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.624167 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.638006 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.677999 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.696780 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.714107 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.728315 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.821345 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.839062 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.881393 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.883275 4687 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.904812 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 16:06:31 crc kubenswrapper[4687]: I0312 16:06:31.923236 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.038442 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.135137 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.184011 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.261907 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.296710 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.431097 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.503663 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.611224 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.698242 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.793675 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.822558 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.836320 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 12 16:06:32 crc kubenswrapper[4687]: I0312 16:06:32.897387 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.054425 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.326273 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.514149 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.556836 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.650449 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.719922 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.773748 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.820786 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.838922 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 16:06:33 crc kubenswrapper[4687]: I0312 16:06:33.974448 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.023196 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.087330 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.172165 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.172567 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.172601 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.300237 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.316246 4687 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.320072 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.320047777 podStartE2EDuration="46.320047777s" podCreationTimestamp="2026-03-12 16:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:06:08.507160052 +0000 UTC m=+217.471122406" watchObservedRunningTime="2026-03-12 16:06:34.320047777 +0000 UTC m=+243.284010161" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.323352 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q8mpl","openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.323465 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd","openshift-infra/auto-csr-approver-29555526-j8xjw"] Mar 12 16:06:34 crc kubenswrapper[4687]: E0312 16:06:34.323759 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" containerName="installer" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.323792 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" containerName="installer" Mar 12 16:06:34 crc 
kubenswrapper[4687]: E0312 16:06:34.323809 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" containerName="oauth-openshift" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.323822 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" containerName="oauth-openshift" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.324145 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="296d3d6d-ef43-4965-a804-d1bb0c126d64" containerName="installer" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.324252 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" containerName="oauth-openshift" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.324282 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.324338 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e9f83d14-7e88-4e8b-9dc3-5fecc4a73fd4" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.324944 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.325176 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f","openshift-controller-manager/controller-manager-7989f78bdf-zgktt"] Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.325407 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" podUID="b82495e4-c994-496e-9813-d6c64e65caba" containerName="controller-manager" containerID="cri-o://7724527d846751a7246e20f20448b9c681d2019619a95356ad6fd3bf0a5f18ec" gracePeriod=30 Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.325589 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555526-j8xjw" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.325696 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" podUID="b9f7020e-502f-4991-b432-21db9ae6af07" containerName="route-controller-manager" containerID="cri-o://8ddfc499c53ac77be62d2bc7e496ba0ba148a792452edf68059a7bd0876c84c4" gracePeriod=30 Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.327557 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.332051 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.332165 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.332174 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.332552 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.332589 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.332709 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.333183 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.333295 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.333373 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.333643 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.334468 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.334538 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.334607 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.337169 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.337874 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.345605 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.349706 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.368997 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.372262 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.412972 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-audit-policies\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413320 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413349 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413381 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-error\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413441 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1d878fd-d9a8-4044-9fce-70e660b7fcad-audit-dir\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413465 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-session\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413495 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-service-ca\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413523 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24d6\" (UniqueName: \"kubernetes.io/projected/93d7f52a-a823-4b24-a4f6-740b496c4540-kube-api-access-n24d6\") pod \"auto-csr-approver-29555526-j8xjw\" (UID: \"93d7f52a-a823-4b24-a4f6-740b496c4540\") " pod="openshift-infra/auto-csr-approver-29555526-j8xjw" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413548 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413577 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5glm\" (UniqueName: \"kubernetes.io/projected/b1d878fd-d9a8-4044-9fce-70e660b7fcad-kube-api-access-r5glm\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413599 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-router-certs\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413618 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-login\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " 
pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.413644 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.420097 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.420081756 podStartE2EDuration="26.420081756s" podCreationTimestamp="2026-03-12 16:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:06:34.419809199 +0000 UTC m=+243.383771553" watchObservedRunningTime="2026-03-12 16:06:34.420081756 +0000 UTC m=+243.384044100" Mar 12 16:06:34 crc kubenswrapper[4687]: E0312 16:06:34.431258 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb82495e4_c994_496e_9813_d6c64e65caba.slice/crio-7724527d846751a7246e20f20448b9c681d2019619a95356ad6fd3bf0a5f18ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb82495e4_c994_496e_9813_d6c64e65caba.slice/crio-conmon-7724527d846751a7246e20f20448b9c681d2019619a95356ad6fd3bf0a5f18ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9f7020e_502f_4991_b432_21db9ae6af07.slice/crio-8ddfc499c53ac77be62d2bc7e496ba0ba148a792452edf68059a7bd0876c84c4.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5glm\" (UniqueName: \"kubernetes.io/projected/b1d878fd-d9a8-4044-9fce-70e660b7fcad-kube-api-access-r5glm\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515403 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-router-certs\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515420 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-login\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515477 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-audit-policies\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515512 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515545 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-error\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1d878fd-d9a8-4044-9fce-70e660b7fcad-audit-dir\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515588 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515606 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-session\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515623 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-service-ca\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.515654 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24d6\" (UniqueName: \"kubernetes.io/projected/93d7f52a-a823-4b24-a4f6-740b496c4540-kube-api-access-n24d6\") pod \"auto-csr-approver-29555526-j8xjw\" (UID: \"93d7f52a-a823-4b24-a4f6-740b496c4540\") " pod="openshift-infra/auto-csr-approver-29555526-j8xjw" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.518684 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.519554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-service-ca\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.519733 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1d878fd-d9a8-4044-9fce-70e660b7fcad-audit-dir\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.519738 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.519744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1d878fd-d9a8-4044-9fce-70e660b7fcad-audit-policies\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: 
\"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.521239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-router-certs\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.521619 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.528632 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-session\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.529774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-error\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.529948 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5glm\" (UniqueName: \"kubernetes.io/projected/b1d878fd-d9a8-4044-9fce-70e660b7fcad-kube-api-access-r5glm\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.534789 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.535306 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n24d6\" (UniqueName: \"kubernetes.io/projected/93d7f52a-a823-4b24-a4f6-740b496c4540-kube-api-access-n24d6\") pod \"auto-csr-approver-29555526-j8xjw\" (UID: \"93d7f52a-a823-4b24-a4f6-740b496c4540\") " pod="openshift-infra/auto-csr-approver-29555526-j8xjw" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.536019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " 
pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.536869 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-user-template-login\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.536870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d878fd-d9a8-4044-9fce-70e660b7fcad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85df6bd6d5-qdxtd\" (UID: \"b1d878fd-d9a8-4044-9fce-70e660b7fcad\") " pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.684806 4687 generic.go:334] "Generic (PLEG): container finished" podID="b9f7020e-502f-4991-b432-21db9ae6af07" containerID="8ddfc499c53ac77be62d2bc7e496ba0ba148a792452edf68059a7bd0876c84c4" exitCode=0 Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.684920 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" event={"ID":"b9f7020e-502f-4991-b432-21db9ae6af07","Type":"ContainerDied","Data":"8ddfc499c53ac77be62d2bc7e496ba0ba148a792452edf68059a7bd0876c84c4"} Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.686897 4687 generic.go:334] "Generic (PLEG): container finished" podID="b82495e4-c994-496e-9813-d6c64e65caba" containerID="7724527d846751a7246e20f20448b9c681d2019619a95356ad6fd3bf0a5f18ec" exitCode=0 Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.686978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" event={"ID":"b82495e4-c994-496e-9813-d6c64e65caba","Type":"ContainerDied","Data":"7724527d846751a7246e20f20448b9c681d2019619a95356ad6fd3bf0a5f18ec"} Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.712610 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.770567 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.788770 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.804803 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.810836 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555526-j8xjw" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819305 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-client-ca\") pod \"b9f7020e-502f-4991-b432-21db9ae6af07\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819651 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82495e4-c994-496e-9813-d6c64e65caba-serving-cert\") pod \"b82495e4-c994-496e-9813-d6c64e65caba\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819680 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f7020e-502f-4991-b432-21db9ae6af07-serving-cert\") pod \"b9f7020e-502f-4991-b432-21db9ae6af07\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819726 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-config\") pod \"b82495e4-c994-496e-9813-d6c64e65caba\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819763 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-proxy-ca-bundles\") pod \"b82495e4-c994-496e-9813-d6c64e65caba\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819789 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-config\") pod \"b9f7020e-502f-4991-b432-21db9ae6af07\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819821 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-client-ca\") pod \"b82495e4-c994-496e-9813-d6c64e65caba\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819884 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvlnx\" (UniqueName: \"kubernetes.io/projected/b82495e4-c994-496e-9813-d6c64e65caba-kube-api-access-fvlnx\") pod \"b82495e4-c994-496e-9813-d6c64e65caba\" (UID: \"b82495e4-c994-496e-9813-d6c64e65caba\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.819914 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds8sx\" (UniqueName: \"kubernetes.io/projected/b9f7020e-502f-4991-b432-21db9ae6af07-kube-api-access-ds8sx\") pod \"b9f7020e-502f-4991-b432-21db9ae6af07\" (UID: \"b9f7020e-502f-4991-b432-21db9ae6af07\") " Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.820061 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9f7020e-502f-4991-b432-21db9ae6af07" (UID: 
"b9f7020e-502f-4991-b432-21db9ae6af07"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.820222 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.821184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-client-ca" (OuterVolumeSpecName: "client-ca") pod "b82495e4-c994-496e-9813-d6c64e65caba" (UID: "b82495e4-c994-496e-9813-d6c64e65caba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.821212 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b82495e4-c994-496e-9813-d6c64e65caba" (UID: "b82495e4-c994-496e-9813-d6c64e65caba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.821674 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-config" (OuterVolumeSpecName: "config") pod "b9f7020e-502f-4991-b432-21db9ae6af07" (UID: "b9f7020e-502f-4991-b432-21db9ae6af07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.821923 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-config" (OuterVolumeSpecName: "config") pod "b82495e4-c994-496e-9813-d6c64e65caba" (UID: "b82495e4-c994-496e-9813-d6c64e65caba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.823290 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f7020e-502f-4991-b432-21db9ae6af07-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9f7020e-502f-4991-b432-21db9ae6af07" (UID: "b9f7020e-502f-4991-b432-21db9ae6af07"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.823459 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f7020e-502f-4991-b432-21db9ae6af07-kube-api-access-ds8sx" (OuterVolumeSpecName: "kube-api-access-ds8sx") pod "b9f7020e-502f-4991-b432-21db9ae6af07" (UID: "b9f7020e-502f-4991-b432-21db9ae6af07"). InnerVolumeSpecName "kube-api-access-ds8sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.824328 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82495e4-c994-496e-9813-d6c64e65caba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b82495e4-c994-496e-9813-d6c64e65caba" (UID: "b82495e4-c994-496e-9813-d6c64e65caba"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.824490 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82495e4-c994-496e-9813-d6c64e65caba-kube-api-access-fvlnx" (OuterVolumeSpecName: "kube-api-access-fvlnx") pod "b82495e4-c994-496e-9813-d6c64e65caba" (UID: "b82495e4-c994-496e-9813-d6c64e65caba"). InnerVolumeSpecName "kube-api-access-fvlnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.873629 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924037 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvlnx\" (UniqueName: \"kubernetes.io/projected/b82495e4-c994-496e-9813-d6c64e65caba-kube-api-access-fvlnx\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924070 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds8sx\" (UniqueName: \"kubernetes.io/projected/b9f7020e-502f-4991-b432-21db9ae6af07-kube-api-access-ds8sx\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924084 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82495e4-c994-496e-9813-d6c64e65caba-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924097 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f7020e-502f-4991-b432-21db9ae6af07-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924109 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924123 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924135 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f7020e-502f-4991-b432-21db9ae6af07-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.924146 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b82495e4-c994-496e-9813-d6c64e65caba-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:34 crc kubenswrapper[4687]: I0312 16:06:34.985834 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.002321 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.173319 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555526-j8xjw"] Mar 12 16:06:35 crc kubenswrapper[4687]: W0312 16:06:35.177922 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93d7f52a_a823_4b24_a4f6_740b496c4540.slice/crio-e237e9a97a8d60af3d19c57da8e46f39df6bfd318aad80cf2d84938d115af759 WatchSource:0}: Error finding container e237e9a97a8d60af3d19c57da8e46f39df6bfd318aad80cf2d84938d115af759: Status 404 returned error can't find the container with id e237e9a97a8d60af3d19c57da8e46f39df6bfd318aad80cf2d84938d115af759 Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.216583 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd"] Mar 12 16:06:35 crc kubenswrapper[4687]: W0312 16:06:35.222760 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d878fd_d9a8_4044_9fce_70e660b7fcad.slice/crio-95550a29e72534f027ea0f8fb2ef9d83973b5113ab4562bb4e49886d637ae82d WatchSource:0}: Error finding container 95550a29e72534f027ea0f8fb2ef9d83973b5113ab4562bb4e49886d637ae82d: Status 404 returned error can't find the container with id 95550a29e72534f027ea0f8fb2ef9d83973b5113ab4562bb4e49886d637ae82d Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.696221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" event={"ID":"b1d878fd-d9a8-4044-9fce-70e660b7fcad","Type":"ContainerStarted","Data":"a8a1a73f16889133cda48ef8456502d7c2a9e455a0098684eb7c44f6e43592a7"} Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.696588 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.696602 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" event={"ID":"b1d878fd-d9a8-4044-9fce-70e660b7fcad","Type":"ContainerStarted","Data":"95550a29e72534f027ea0f8fb2ef9d83973b5113ab4562bb4e49886d637ae82d"} Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.697933 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" event={"ID":"b9f7020e-502f-4991-b432-21db9ae6af07","Type":"ContainerDied","Data":"7d883d1b9cd54957721b85e8224b5a36818f7a68b093484396b1afe317c68530"} Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.697977 4687 scope.go:117] "RemoveContainer" containerID="8ddfc499c53ac77be62d2bc7e496ba0ba148a792452edf68059a7bd0876c84c4" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.698074 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.701919 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" event={"ID":"b82495e4-c994-496e-9813-d6c64e65caba","Type":"ContainerDied","Data":"94cdc458dfd0a883c4da27299bbd2157b8abea11bb763a4bc561e224b891b673"} Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.701991 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7989f78bdf-zgktt" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.710573 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555526-j8xjw" event={"ID":"93d7f52a-a823-4b24-a4f6-740b496c4540","Type":"ContainerStarted","Data":"e237e9a97a8d60af3d19c57da8e46f39df6bfd318aad80cf2d84938d115af759"} Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.724855 4687 scope.go:117] "RemoveContainer" containerID="7724527d846751a7246e20f20448b9c681d2019619a95356ad6fd3bf0a5f18ec" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.730395 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podStartSLOduration=62.730381593 podStartE2EDuration="1m2.730381593s" podCreationTimestamp="2026-03-12 16:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:06:35.729730626 +0000 UTC m=+244.693692970" watchObservedRunningTime="2026-03-12 16:06:35.730381593 +0000 UTC m=+244.694343937" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.738720 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2ffa76-a50a-4e78-994b-1d7fd6a83ddb" path="/var/lib/kubelet/pods/df2ffa76-a50a-4e78-994b-1d7fd6a83ddb/volumes" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.747534 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f"] Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.751966 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dccfd4cdc-wsv7f"] Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.757478 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7989f78bdf-zgktt"] Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.760132 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7989f78bdf-zgktt"] Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.837625 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.887516 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf"] Mar 12 16:06:35 crc kubenswrapper[4687]: E0312 16:06:35.887729 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82495e4-c994-496e-9813-d6c64e65caba" containerName="controller-manager" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.887740 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82495e4-c994-496e-9813-d6c64e65caba" containerName="controller-manager" Mar 12 16:06:35 crc kubenswrapper[4687]: E0312 16:06:35.887752 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f7020e-502f-4991-b432-21db9ae6af07" containerName="route-controller-manager" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.887758 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f7020e-502f-4991-b432-21db9ae6af07" containerName="route-controller-manager" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.888039 4687 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b82495e4-c994-496e-9813-d6c64e65caba" containerName="controller-manager" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.888051 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f7020e-502f-4991-b432-21db9ae6af07" containerName="route-controller-manager" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.888420 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.892001 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.892285 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.892736 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.892754 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.893038 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.893212 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.894052 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p"] Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.894692 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.896514 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.896925 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.897073 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.897247 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.897482 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.900032 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.905100 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.924178 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf"] Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.938582 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-client-ca\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.938660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8mrr\" (UniqueName: \"kubernetes.io/projected/0bc71582-fbeb-4895-8921-949f66c5f1d1-kube-api-access-q8mrr\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.938692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-config\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.938753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-config\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.939393 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0bc71582-fbeb-4895-8921-949f66c5f1d1-serving-cert\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.939428 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-proxy-ca-bundles\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.939496 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9a4752-d308-49ea-b36b-0380c4fc3642-serving-cert\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.939514 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7dd\" (UniqueName: \"kubernetes.io/projected/7e9a4752-d308-49ea-b36b-0380c4fc3642-kube-api-access-cc7dd\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.939535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-client-ca\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:35 crc kubenswrapper[4687]: I0312 16:06:35.949108 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p"] Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.040986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-config\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.041067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bc71582-fbeb-4895-8921-949f66c5f1d1-serving-cert\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.041093 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-proxy-ca-bundles\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: 
I0312 16:06:36.041928 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9a4752-d308-49ea-b36b-0380c4fc3642-serving-cert\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.041954 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc7dd\" (UniqueName: \"kubernetes.io/projected/7e9a4752-d308-49ea-b36b-0380c4fc3642-kube-api-access-cc7dd\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.041979 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-client-ca\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.042008 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-client-ca\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.042039 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8mrr\" (UniqueName: \"kubernetes.io/projected/0bc71582-fbeb-4895-8921-949f66c5f1d1-kube-api-access-q8mrr\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.042061 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-config\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.042898 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-proxy-ca-bundles\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.042995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-client-ca\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.043128 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-config\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.043507 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-client-ca\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.044263 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-config\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.055301 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bc71582-fbeb-4895-8921-949f66c5f1d1-serving-cert\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.056181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9a4752-d308-49ea-b36b-0380c4fc3642-serving-cert\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.058980 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7dd\" (UniqueName: \"kubernetes.io/projected/7e9a4752-d308-49ea-b36b-0380c4fc3642-kube-api-access-cc7dd\") pod \"controller-manager-7c7cd7d4c9-5lz8p\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.062102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8mrr\" (UniqueName: \"kubernetes.io/projected/0bc71582-fbeb-4895-8921-949f66c5f1d1-kube-api-access-q8mrr\") pod \"route-controller-manager-86cc6575b9-t7npf\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.084857 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.085211 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.214955 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.229544 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:36 crc kubenswrapper[4687]: W0312 16:06:36.597606 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc71582_fbeb_4895_8921_949f66c5f1d1.slice/crio-8c143fac954dbb95e68591ba03ba7ef904a6eb9abb27013ab95af70d164adccc WatchSource:0}: Error finding container 8c143fac954dbb95e68591ba03ba7ef904a6eb9abb27013ab95af70d164adccc: Status 404 returned error can't find the container with id 8c143fac954dbb95e68591ba03ba7ef904a6eb9abb27013ab95af70d164adccc Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.598223 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf"] Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.640094 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p"] Mar 12 16:06:36 crc kubenswrapper[4687]: W0312 16:06:36.651259 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e9a4752_d308_49ea_b36b_0380c4fc3642.slice/crio-33914dab3389d74aa5015bdf713e62b5af93831ab3efdd88b8f830f8e64af93e WatchSource:0}: Error finding container 33914dab3389d74aa5015bdf713e62b5af93831ab3efdd88b8f830f8e64af93e: Status 404 returned error can't find the container with id 33914dab3389d74aa5015bdf713e62b5af93831ab3efdd88b8f830f8e64af93e Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.716791 4687 generic.go:334] "Generic (PLEG): container finished" podID="93d7f52a-a823-4b24-a4f6-740b496c4540" containerID="cf4c4148630e5e513b65bdc907eb35681fcf137aee8e26bdd774daae8364a56c" exitCode=0 Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.716861 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555526-j8xjw" event={"ID":"93d7f52a-a823-4b24-a4f6-740b496c4540","Type":"ContainerDied","Data":"cf4c4148630e5e513b65bdc907eb35681fcf137aee8e26bdd774daae8364a56c"} Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.718571 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" event={"ID":"0bc71582-fbeb-4895-8921-949f66c5f1d1","Type":"ContainerStarted","Data":"8c143fac954dbb95e68591ba03ba7ef904a6eb9abb27013ab95af70d164adccc"} Mar 12 16:06:36 crc kubenswrapper[4687]: I0312 16:06:36.720569 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" event={"ID":"7e9a4752-d308-49ea-b36b-0380c4fc3642","Type":"ContainerStarted","Data":"33914dab3389d74aa5015bdf713e62b5af93831ab3efdd88b8f830f8e64af93e"} Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.594835 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.731754 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" event={"ID":"7e9a4752-d308-49ea-b36b-0380c4fc3642","Type":"ContainerStarted","Data":"63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793"} Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.749076 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82495e4-c994-496e-9813-d6c64e65caba" 
path="/var/lib/kubelet/pods/b82495e4-c994-496e-9813-d6c64e65caba/volumes" Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.749675 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f7020e-502f-4991-b432-21db9ae6af07" path="/var/lib/kubelet/pods/b9f7020e-502f-4991-b432-21db9ae6af07/volumes" Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.750274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" event={"ID":"0bc71582-fbeb-4895-8921-949f66c5f1d1","Type":"ContainerStarted","Data":"13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b"} Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.750318 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.750396 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.756955 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" podStartSLOduration=4.756937603 podStartE2EDuration="4.756937603s" podCreationTimestamp="2026-03-12 16:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:06:37.750008527 +0000 UTC m=+246.713970871" watchObservedRunningTime="2026-03-12 16:06:37.756937603 +0000 UTC m=+246.720899947" Mar 12 16:06:37 crc kubenswrapper[4687]: I0312 16:06:37.771239 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" podStartSLOduration=4.771219827 podStartE2EDuration="4.771219827s" podCreationTimestamp="2026-03-12 16:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:06:37.767945879 +0000 UTC m=+246.731908243" watchObservedRunningTime="2026-03-12 16:06:37.771219827 +0000 UTC m=+246.735182171" Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.037845 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555526-j8xjw" Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.168837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n24d6\" (UniqueName: \"kubernetes.io/projected/93d7f52a-a823-4b24-a4f6-740b496c4540-kube-api-access-n24d6\") pod \"93d7f52a-a823-4b24-a4f6-740b496c4540\" (UID: \"93d7f52a-a823-4b24-a4f6-740b496c4540\") " Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.174669 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d7f52a-a823-4b24-a4f6-740b496c4540-kube-api-access-n24d6" (OuterVolumeSpecName: "kube-api-access-n24d6") pod "93d7f52a-a823-4b24-a4f6-740b496c4540" (UID: "93d7f52a-a823-4b24-a4f6-740b496c4540"). InnerVolumeSpecName "kube-api-access-n24d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.270541 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n24d6\" (UniqueName: \"kubernetes.io/projected/93d7f52a-a823-4b24-a4f6-740b496c4540-kube-api-access-n24d6\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.739988 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555526-j8xjw" Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.740084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555526-j8xjw" event={"ID":"93d7f52a-a823-4b24-a4f6-740b496c4540","Type":"ContainerDied","Data":"e237e9a97a8d60af3d19c57da8e46f39df6bfd318aad80cf2d84938d115af759"} Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.740128 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e237e9a97a8d60af3d19c57da8e46f39df6bfd318aad80cf2d84938d115af759" Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.740435 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:38 crc kubenswrapper[4687]: I0312 16:06:38.748076 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:42 crc kubenswrapper[4687]: I0312 16:06:42.481548 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 16:06:42 crc kubenswrapper[4687]: I0312 16:06:42.481788 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5f0f99fff29f6fb9e8567dde5e7f46a9649e658b8ca0c7d1b015f9732832b675" gracePeriod=5 Mar 12 16:06:44 crc kubenswrapper[4687]: I0312 16:06:44.122057 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:06:44 crc kubenswrapper[4687]: I0312 16:06:44.122216 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:06:47 crc kubenswrapper[4687]: I0312 16:06:47.800464 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 16:06:47 crc kubenswrapper[4687]: I0312 16:06:47.800517 4687 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5f0f99fff29f6fb9e8567dde5e7f46a9649e658b8ca0c7d1b015f9732832b675" exitCode=137 Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.043168 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.043250 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112177 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112252 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112325 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112417 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112405 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112453 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112454 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112696 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112719 4687 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112737 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.112752 4687 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.122098 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.214259 4687 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.809339 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.809650 4687 scope.go:117] "RemoveContainer" containerID="5f0f99fff29f6fb9e8567dde5e7f46a9649e658b8ca0c7d1b015f9732832b675" Mar 12 16:06:48 crc kubenswrapper[4687]: I0312 16:06:48.809716 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 16:06:49 crc kubenswrapper[4687]: I0312 16:06:49.740092 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 12 16:06:49 crc kubenswrapper[4687]: I0312 16:06:49.740336 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 12 16:06:49 crc kubenswrapper[4687]: I0312 16:06:49.750423 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 16:06:49 crc kubenswrapper[4687]: I0312 16:06:49.750456 4687 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="38395e9e-5504-4545-930b-13668ebe8f92" Mar 12 16:06:49 crc kubenswrapper[4687]: I0312 16:06:49.753209 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 16:06:49 crc kubenswrapper[4687]: I0312 16:06:49.753240 4687 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="38395e9e-5504-4545-930b-13668ebe8f92" Mar 12 16:06:53 crc kubenswrapper[4687]: I0312 16:06:53.868907 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p"] Mar 12 16:06:53 crc kubenswrapper[4687]: I0312 16:06:53.869749 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" podUID="7e9a4752-d308-49ea-b36b-0380c4fc3642" containerName="controller-manager" containerID="cri-o://63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793" gracePeriod=30 Mar 12 16:06:53 crc kubenswrapper[4687]: I0312 16:06:53.880408 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf"] Mar 12 16:06:53 crc kubenswrapper[4687]: I0312 16:06:53.880658 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" podUID="0bc71582-fbeb-4895-8921-949f66c5f1d1" containerName="route-controller-manager" containerID="cri-o://13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b" gracePeriod=30 Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.365853 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.428277 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494482 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-client-ca\") pod \"7e9a4752-d308-49ea-b36b-0380c4fc3642\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494556 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8mrr\" (UniqueName: \"kubernetes.io/projected/0bc71582-fbeb-4895-8921-949f66c5f1d1-kube-api-access-q8mrr\") pod \"0bc71582-fbeb-4895-8921-949f66c5f1d1\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494603 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-config\") pod \"7e9a4752-d308-49ea-b36b-0380c4fc3642\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494634 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bc71582-fbeb-4895-8921-949f66c5f1d1-serving-cert\") pod \"0bc71582-fbeb-4895-8921-949f66c5f1d1\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494667 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc7dd\" (UniqueName: \"kubernetes.io/projected/7e9a4752-d308-49ea-b36b-0380c4fc3642-kube-api-access-cc7dd\") pod \"7e9a4752-d308-49ea-b36b-0380c4fc3642\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494686 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-config\") pod \"0bc71582-fbeb-4895-8921-949f66c5f1d1\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494716 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-client-ca\") pod \"0bc71582-fbeb-4895-8921-949f66c5f1d1\" (UID: \"0bc71582-fbeb-4895-8921-949f66c5f1d1\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494743 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9a4752-d308-49ea-b36b-0380c4fc3642-serving-cert\") pod \"7e9a4752-d308-49ea-b36b-0380c4fc3642\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.494760 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-proxy-ca-bundles\") pod \"7e9a4752-d308-49ea-b36b-0380c4fc3642\" (UID: \"7e9a4752-d308-49ea-b36b-0380c4fc3642\") " Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.495718 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"7e9a4752-d308-49ea-b36b-0380c4fc3642" (UID: "7e9a4752-d308-49ea-b36b-0380c4fc3642"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.495812 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e9a4752-d308-49ea-b36b-0380c4fc3642" (UID: "7e9a4752-d308-49ea-b36b-0380c4fc3642"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.496050 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-client-ca" (OuterVolumeSpecName: "client-ca") pod "0bc71582-fbeb-4895-8921-949f66c5f1d1" (UID: "0bc71582-fbeb-4895-8921-949f66c5f1d1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.496202 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-config" (OuterVolumeSpecName: "config") pod "7e9a4752-d308-49ea-b36b-0380c4fc3642" (UID: "7e9a4752-d308-49ea-b36b-0380c4fc3642"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.496659 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-config" (OuterVolumeSpecName: "config") pod "0bc71582-fbeb-4895-8921-949f66c5f1d1" (UID: "0bc71582-fbeb-4895-8921-949f66c5f1d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.500020 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9a4752-d308-49ea-b36b-0380c4fc3642-kube-api-access-cc7dd" (OuterVolumeSpecName: "kube-api-access-cc7dd") pod "7e9a4752-d308-49ea-b36b-0380c4fc3642" (UID: "7e9a4752-d308-49ea-b36b-0380c4fc3642"). InnerVolumeSpecName "kube-api-access-cc7dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.500051 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9a4752-d308-49ea-b36b-0380c4fc3642-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e9a4752-d308-49ea-b36b-0380c4fc3642" (UID: "7e9a4752-d308-49ea-b36b-0380c4fc3642"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.500575 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc71582-fbeb-4895-8921-949f66c5f1d1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0bc71582-fbeb-4895-8921-949f66c5f1d1" (UID: "0bc71582-fbeb-4895-8921-949f66c5f1d1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.500604 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc71582-fbeb-4895-8921-949f66c5f1d1-kube-api-access-q8mrr" (OuterVolumeSpecName: "kube-api-access-q8mrr") pod "0bc71582-fbeb-4895-8921-949f66c5f1d1" (UID: "0bc71582-fbeb-4895-8921-949f66c5f1d1"). InnerVolumeSpecName "kube-api-access-q8mrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.595660 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.595886 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bc71582-fbeb-4895-8921-949f66c5f1d1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.595975 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.596092 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc7dd\" (UniqueName: \"kubernetes.io/projected/7e9a4752-d308-49ea-b36b-0380c4fc3642-kube-api-access-cc7dd\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.596186 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bc71582-fbeb-4895-8921-949f66c5f1d1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.596272 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9a4752-d308-49ea-b36b-0380c4fc3642-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.596376 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.596460 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9a4752-d308-49ea-b36b-0380c4fc3642-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.596538 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8mrr\" (UniqueName: \"kubernetes.io/projected/0bc71582-fbeb-4895-8921-949f66c5f1d1-kube-api-access-q8mrr\") on node \"crc\" DevicePath \"\"" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.853854 4687 generic.go:334] "Generic (PLEG): container finished" podID="0bc71582-fbeb-4895-8921-949f66c5f1d1" containerID="13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b" exitCode=0 Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.853921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" event={"ID":"0bc71582-fbeb-4895-8921-949f66c5f1d1","Type":"ContainerDied","Data":"13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b"} Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 
16:06:54.853952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" event={"ID":"0bc71582-fbeb-4895-8921-949f66c5f1d1","Type":"ContainerDied","Data":"8c143fac954dbb95e68591ba03ba7ef904a6eb9abb27013ab95af70d164adccc"} Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.853968 4687 scope.go:117] "RemoveContainer" containerID="13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.854068 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.862271 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e9a4752-d308-49ea-b36b-0380c4fc3642" containerID="63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793" exitCode=0 Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.862330 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" event={"ID":"7e9a4752-d308-49ea-b36b-0380c4fc3642","Type":"ContainerDied","Data":"63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793"} Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.862368 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.862399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p" event={"ID":"7e9a4752-d308-49ea-b36b-0380c4fc3642","Type":"ContainerDied","Data":"33914dab3389d74aa5015bdf713e62b5af93831ab3efdd88b8f830f8e64af93e"} Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.883583 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf"] Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.890929 4687 scope.go:117] "RemoveContainer" containerID="13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.891135 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-t7npf"] Mar 12 16:06:54 crc kubenswrapper[4687]: E0312 16:06:54.891409 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b\": container with ID starting with 13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b not found: ID does not exist" containerID="13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.891438 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b"} err="failed to get container status \"13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b\": rpc error: code = NotFound desc = could not find container \"13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b\": container with ID starting with 13f09db9a008d877f4436fd597be62e5bfaee792a7ff56eb9522f614cb8b160b not found: ID does not exist" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 
16:06:54.891455 4687 scope.go:117] "RemoveContainer" containerID="63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.901450 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p"] Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.907023 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-5lz8p"] Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.910641 4687 scope.go:117] "RemoveContainer" containerID="63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793" Mar 12 16:06:54 crc kubenswrapper[4687]: E0312 16:06:54.912127 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793\": container with ID starting with 63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793 not found: ID does not exist" containerID="63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793" Mar 12 16:06:54 crc kubenswrapper[4687]: I0312 16:06:54.912183 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793"} err="failed to get container status \"63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793\": rpc error: code = NotFound desc = could not find container \"63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793\": container with ID starting with 63f83fcbaea48dd71e5c78a63fecab32433ed304c37b5e8b43b067b9b9050793 not found: ID does not exist" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.745785 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc71582-fbeb-4895-8921-949f66c5f1d1" path="/var/lib/kubelet/pods/0bc71582-fbeb-4895-8921-949f66c5f1d1/volumes" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.747258 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9a4752-d308-49ea-b36b-0380c4fc3642" path="/var/lib/kubelet/pods/7e9a4752-d308-49ea-b36b-0380c4fc3642/volumes" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.910658 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77b475f48d-tkhcl"] Mar 12 16:06:55 crc kubenswrapper[4687]: E0312 16:06:55.910902 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.910913 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 16:06:55 crc kubenswrapper[4687]: E0312 16:06:55.910950 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc71582-fbeb-4895-8921-949f66c5f1d1" containerName="route-controller-manager" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.910957 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc71582-fbeb-4895-8921-949f66c5f1d1" containerName="route-controller-manager" Mar 12 16:06:55 crc kubenswrapper[4687]: E0312 16:06:55.910968 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9a4752-d308-49ea-b36b-0380c4fc3642" containerName="controller-manager" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.910974 4687 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7e9a4752-d308-49ea-b36b-0380c4fc3642" containerName="controller-manager" Mar 12 16:06:55 crc kubenswrapper[4687]: E0312 16:06:55.910983 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d7f52a-a823-4b24-a4f6-740b496c4540" containerName="oc" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.910988 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d7f52a-a823-4b24-a4f6-740b496c4540" containerName="oc" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.911089 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.911100 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc71582-fbeb-4895-8921-949f66c5f1d1" containerName="route-controller-manager" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.911110 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9a4752-d308-49ea-b36b-0380c4fc3642" containerName="controller-manager" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.911116 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d7f52a-a823-4b24-a4f6-740b496c4540" containerName="oc" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.911470 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.913516 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.916675 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t"] Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.917904 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.918191 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.918659 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.918852 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.919496 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.921613 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t"] Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.921773 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.921803 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.921934 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.923324 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.924391 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.925693 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b475f48d-tkhcl"] Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.928819 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.929268 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:06:55 crc kubenswrapper[4687]: I0312 16:06:55.929554 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.016746 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppz4k\" (UniqueName: \"kubernetes.io/projected/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-kube-api-access-ppz4k\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.016881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e275100-2fbf-40d4-9b03-d069454ebe8a-serving-cert\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: 
\"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.016928 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-proxy-ca-bundles\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.016963 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-client-ca\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.016999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7mml\" (UniqueName: \"kubernetes.io/projected/5e275100-2fbf-40d4-9b03-d069454ebe8a-kube-api-access-j7mml\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.017047 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-config\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.017109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-config\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.017249 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-serving-cert\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.017329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-client-ca\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.118692 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppz4k\" (UniqueName: \"kubernetes.io/projected/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-kube-api-access-ppz4k\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: 
\"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.118800 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e275100-2fbf-40d4-9b03-d069454ebe8a-serving-cert\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.118829 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-client-ca\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.118849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-proxy-ca-bundles\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.118873 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7mml\" (UniqueName: \"kubernetes.io/projected/5e275100-2fbf-40d4-9b03-d069454ebe8a-kube-api-access-j7mml\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.118913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-config\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.118963 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-config\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.119018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-serving-cert\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.119058 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-client-ca\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 
16:06:56.121053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-client-ca\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.121090 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-client-ca\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.121588 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-proxy-ca-bundles\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.121640 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-config\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.122998 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-config\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.125602 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e275100-2fbf-40d4-9b03-d069454ebe8a-serving-cert\") pod \"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.132047 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-serving-cert\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.141400 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppz4k\" (UniqueName: \"kubernetes.io/projected/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-kube-api-access-ppz4k\") pod \"controller-manager-77b475f48d-tkhcl\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.142647 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7mml\" (UniqueName: \"kubernetes.io/projected/5e275100-2fbf-40d4-9b03-d069454ebe8a-kube-api-access-j7mml\") pod 
\"route-controller-manager-d6ddf6c78-5jt4t\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.238915 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.249654 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.485115 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b475f48d-tkhcl"] Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.532348 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t"] Mar 12 16:06:56 crc kubenswrapper[4687]: W0312 16:06:56.538477 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e275100_2fbf_40d4_9b03_d069454ebe8a.slice/crio-cdd0152bcfeaa3b26439dc00b4edb90337d2ccb170cc013ddac4efd46acfb1a7 WatchSource:0}: Error finding container cdd0152bcfeaa3b26439dc00b4edb90337d2ccb170cc013ddac4efd46acfb1a7: Status 404 returned error can't find the container with id cdd0152bcfeaa3b26439dc00b4edb90337d2ccb170cc013ddac4efd46acfb1a7 Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.875452 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" event={"ID":"5e275100-2fbf-40d4-9b03-d069454ebe8a","Type":"ContainerStarted","Data":"c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2"} Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.875789 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" event={"ID":"5e275100-2fbf-40d4-9b03-d069454ebe8a","Type":"ContainerStarted","Data":"cdd0152bcfeaa3b26439dc00b4edb90337d2ccb170cc013ddac4efd46acfb1a7"} Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.875819 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.876980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" event={"ID":"b9bb6e41-489a-446a-bf6f-520bbd1ade3a","Type":"ContainerStarted","Data":"7f07f0b3727fe6a53e5960454fbff331777a0ef48c1a07587a7c01e2255d5f08"} Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.877020 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" event={"ID":"b9bb6e41-489a-446a-bf6f-520bbd1ade3a","Type":"ContainerStarted","Data":"93b803ddc68dcd8dbfb5636fa84bafffe3de156fd1dc3e9b59e6813b6a12355d"} Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.877202 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.881011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 
12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.894892 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" podStartSLOduration=3.894877605 podStartE2EDuration="3.894877605s" podCreationTimestamp="2026-03-12 16:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:06:56.891378241 +0000 UTC m=+265.855340585" watchObservedRunningTime="2026-03-12 16:06:56.894877605 +0000 UTC m=+265.858839969" Mar 12 16:06:56 crc kubenswrapper[4687]: I0312 16:06:56.907202 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" podStartSLOduration=3.907188666 podStartE2EDuration="3.907188666s" podCreationTimestamp="2026-03-12 16:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:06:56.906641272 +0000 UTC m=+265.870603616" watchObservedRunningTime="2026-03-12 16:06:56.907188666 +0000 UTC m=+265.871151020" Mar 12 16:06:57 crc kubenswrapper[4687]: I0312 16:06:57.012543 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:07:13 crc kubenswrapper[4687]: I0312 16:07:13.829192 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77b475f48d-tkhcl"] Mar 12 16:07:13 crc kubenswrapper[4687]: I0312 16:07:13.830203 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" podUID="b9bb6e41-489a-446a-bf6f-520bbd1ade3a" containerName="controller-manager" containerID="cri-o://7f07f0b3727fe6a53e5960454fbff331777a0ef48c1a07587a7c01e2255d5f08" gracePeriod=30 Mar 12 16:07:13 crc kubenswrapper[4687]: I0312 16:07:13.852506 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t"] Mar 12 16:07:13 crc kubenswrapper[4687]: I0312 16:07:13.852753 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" podUID="5e275100-2fbf-40d4-9b03-d069454ebe8a" containerName="route-controller-manager" containerID="cri-o://c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2" gracePeriod=30 Mar 12 16:07:13 crc kubenswrapper[4687]: I0312 16:07:13.965790 4687 generic.go:334] "Generic (PLEG): container finished" podID="b9bb6e41-489a-446a-bf6f-520bbd1ade3a" containerID="7f07f0b3727fe6a53e5960454fbff331777a0ef48c1a07587a7c01e2255d5f08" exitCode=0 Mar 12 16:07:13 crc kubenswrapper[4687]: I0312 16:07:13.965839 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" event={"ID":"b9bb6e41-489a-446a-bf6f-520bbd1ade3a","Type":"ContainerDied","Data":"7f07f0b3727fe6a53e5960454fbff331777a0ef48c1a07587a7c01e2255d5f08"} Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.122016 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.122351 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.122423 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.123338 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.123418 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34" gracePeriod=600 Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.428961 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.471842 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.568746 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e275100-2fbf-40d4-9b03-d069454ebe8a-serving-cert\") pod \"5e275100-2fbf-40d4-9b03-d069454ebe8a\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.568783 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-config\") pod \"5e275100-2fbf-40d4-9b03-d069454ebe8a\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.568808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppz4k\" (UniqueName: \"kubernetes.io/projected/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-kube-api-access-ppz4k\") pod \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.568830 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-proxy-ca-bundles\") pod \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.568870 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-client-ca\") pod 
\"5e275100-2fbf-40d4-9b03-d069454ebe8a\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.568900 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7mml\" (UniqueName: \"kubernetes.io/projected/5e275100-2fbf-40d4-9b03-d069454ebe8a-kube-api-access-j7mml\") pod \"5e275100-2fbf-40d4-9b03-d069454ebe8a\" (UID: \"5e275100-2fbf-40d4-9b03-d069454ebe8a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.568927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-serving-cert\") pod \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.569054 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-client-ca\") pod \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.569087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-config\") pod \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\" (UID: \"b9bb6e41-489a-446a-bf6f-520bbd1ade3a\") " Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.569956 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-config" (OuterVolumeSpecName: "config") pod "b9bb6e41-489a-446a-bf6f-520bbd1ade3a" (UID: "b9bb6e41-489a-446a-bf6f-520bbd1ade3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.569966 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9bb6e41-489a-446a-bf6f-520bbd1ade3a" (UID: "b9bb6e41-489a-446a-bf6f-520bbd1ade3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.570065 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b9bb6e41-489a-446a-bf6f-520bbd1ade3a" (UID: "b9bb6e41-489a-446a-bf6f-520bbd1ade3a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.570643 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-config" (OuterVolumeSpecName: "config") pod "5e275100-2fbf-40d4-9b03-d069454ebe8a" (UID: "5e275100-2fbf-40d4-9b03-d069454ebe8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.572690 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e275100-2fbf-40d4-9b03-d069454ebe8a" (UID: "5e275100-2fbf-40d4-9b03-d069454ebe8a"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.574070 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9bb6e41-489a-446a-bf6f-520bbd1ade3a" (UID: "b9bb6e41-489a-446a-bf6f-520bbd1ade3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.574687 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e275100-2fbf-40d4-9b03-d069454ebe8a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e275100-2fbf-40d4-9b03-d069454ebe8a" (UID: "5e275100-2fbf-40d4-9b03-d069454ebe8a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.575327 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e275100-2fbf-40d4-9b03-d069454ebe8a-kube-api-access-j7mml" (OuterVolumeSpecName: "kube-api-access-j7mml") pod "5e275100-2fbf-40d4-9b03-d069454ebe8a" (UID: "5e275100-2fbf-40d4-9b03-d069454ebe8a"). InnerVolumeSpecName "kube-api-access-j7mml". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.575351 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-kube-api-access-ppz4k" (OuterVolumeSpecName: "kube-api-access-ppz4k") pod "b9bb6e41-489a-446a-bf6f-520bbd1ade3a" (UID: "b9bb6e41-489a-446a-bf6f-520bbd1ade3a"). InnerVolumeSpecName "kube-api-access-ppz4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670480 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670516 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7mml\" (UniqueName: \"kubernetes.io/projected/5e275100-2fbf-40d4-9b03-d069454ebe8a-kube-api-access-j7mml\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670528 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670536 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670545 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670553 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e275100-2fbf-40d4-9b03-d069454ebe8a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670588 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e275100-2fbf-40d4-9b03-d069454ebe8a-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670600 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppz4k\" (UniqueName: \"kubernetes.io/projected/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-kube-api-access-ppz4k\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.670610 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9bb6e41-489a-446a-bf6f-520bbd1ade3a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.911250 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs"] Mar 12 16:07:14 crc kubenswrapper[4687]: E0312 16:07:14.911740 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e275100-2fbf-40d4-9b03-d069454ebe8a" containerName="route-controller-manager" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.911752 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e275100-2fbf-40d4-9b03-d069454ebe8a" containerName="route-controller-manager" Mar 12 16:07:14 crc kubenswrapper[4687]: E0312 16:07:14.911763 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bb6e41-489a-446a-bf6f-520bbd1ade3a" containerName="controller-manager" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.911769 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bb6e41-489a-446a-bf6f-520bbd1ade3a" containerName="controller-manager" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.911851 4687 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5e275100-2fbf-40d4-9b03-d069454ebe8a" containerName="route-controller-manager" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.911898 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bb6e41-489a-446a-bf6f-520bbd1ade3a" containerName="controller-manager" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.912227 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.921558 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs"] Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.970845 4687 generic.go:334] "Generic (PLEG): container finished" podID="5e275100-2fbf-40d4-9b03-d069454ebe8a" containerID="c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2" exitCode=0 Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.970900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" event={"ID":"5e275100-2fbf-40d4-9b03-d069454ebe8a","Type":"ContainerDied","Data":"c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2"} Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.970926 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" event={"ID":"5e275100-2fbf-40d4-9b03-d069454ebe8a","Type":"ContainerDied","Data":"cdd0152bcfeaa3b26439dc00b4edb90337d2ccb170cc013ddac4efd46acfb1a7"} Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.970943 4687 scope.go:117] "RemoveContainer" containerID="c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.971030 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t" Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.975749 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34" exitCode=0 Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.975813 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34"} Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.975832 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"ca9af5a60d7edd2ac85d52bd0521eb3c6c568aec927cc166905e24d8ea4a0e8a"} Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.977013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" event={"ID":"b9bb6e41-489a-446a-bf6f-520bbd1ade3a","Type":"ContainerDied","Data":"93b803ddc68dcd8dbfb5636fa84bafffe3de156fd1dc3e9b59e6813b6a12355d"} Mar 12 16:07:14 crc kubenswrapper[4687]: I0312 16:07:14.977049 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77b475f48d-tkhcl" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.011262 4687 scope.go:117] "RemoveContainer" containerID="c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2" Mar 12 16:07:15 crc kubenswrapper[4687]: E0312 16:07:15.012056 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2\": container with ID starting with c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2 not found: ID does not exist" containerID="c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.012095 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2"} err="failed to get container status \"c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2\": rpc error: code = NotFound desc = could not find container \"c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2\": container with ID starting with c36e48338450804893ce50a81690ba00a8078d22366418707a2d9db40333d2a2 not found: ID does not exist" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.012120 4687 scope.go:117] "RemoveContainer" containerID="7f07f0b3727fe6a53e5960454fbff331777a0ef48c1a07587a7c01e2255d5f08" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.026698 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t"] Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.033036 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-5jt4t"] Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.036627 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77b475f48d-tkhcl"] Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.041892 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77b475f48d-tkhcl"] Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.074691 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-client-ca\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.074727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc5989f-becf-4c6b-87ac-89e327bd07b6-serving-cert\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.074793 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-proxy-ca-bundles\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " 
pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.074820 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ccgn\" (UniqueName: \"kubernetes.io/projected/2bc5989f-becf-4c6b-87ac-89e327bd07b6-kube-api-access-7ccgn\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.074849 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-config\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.176239 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-config\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.176320 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-client-ca\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.176340 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc5989f-becf-4c6b-87ac-89e327bd07b6-serving-cert\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.176392 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-proxy-ca-bundles\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.176420 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ccgn\" (UniqueName: \"kubernetes.io/projected/2bc5989f-becf-4c6b-87ac-89e327bd07b6-kube-api-access-7ccgn\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.177134 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-client-ca\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.177694 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-proxy-ca-bundles\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.177732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bc5989f-becf-4c6b-87ac-89e327bd07b6-config\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.183501 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bc5989f-becf-4c6b-87ac-89e327bd07b6-serving-cert\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.195421 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ccgn\" (UniqueName: \"kubernetes.io/projected/2bc5989f-becf-4c6b-87ac-89e327bd07b6-kube-api-access-7ccgn\") pod \"controller-manager-7c7cd7d4c9-h8rfs\" (UID: \"2bc5989f-becf-4c6b-87ac-89e327bd07b6\") " pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.230237 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.657901 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs"] Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.740027 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e275100-2fbf-40d4-9b03-d069454ebe8a" path="/var/lib/kubelet/pods/5e275100-2fbf-40d4-9b03-d069454ebe8a/volumes" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.740554 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bb6e41-489a-446a-bf6f-520bbd1ade3a" path="/var/lib/kubelet/pods/b9bb6e41-489a-446a-bf6f-520bbd1ade3a/volumes" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.917583 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86"] Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.918612 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.923772 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.924117 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.924283 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.924511 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86"] Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.924555 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.924938 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.924974 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.984162 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" event={"ID":"2bc5989f-becf-4c6b-87ac-89e327bd07b6","Type":"ContainerStarted","Data":"68b689f7cf799083db3dac78348bcfb0ff24f01877b7bce7261a91b72eadfc06"} Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.984209 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" event={"ID":"2bc5989f-becf-4c6b-87ac-89e327bd07b6","Type":"ContainerStarted","Data":"0b1aebac968ba153f8f9aa1c801ed7b8776573960b6b3fd6e80b61af3fdbde91"} Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.984510 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:15 crc kubenswrapper[4687]: I0312 16:07:15.993109 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.023417 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podStartSLOduration=3.023396123 podStartE2EDuration="3.023396123s" podCreationTimestamp="2026-03-12 16:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:07:16.019624001 +0000 UTC m=+284.983586355" watchObservedRunningTime="2026-03-12 16:07:16.023396123 +0000 UTC m=+284.987358467" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.088777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7pm\" (UniqueName: \"kubernetes.io/projected/bfe3820e-9164-4f99-a317-95c69aa4df0e-kube-api-access-ts7pm\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " 
pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.088863 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfe3820e-9164-4f99-a317-95c69aa4df0e-serving-cert\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.088884 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfe3820e-9164-4f99-a317-95c69aa4df0e-client-ca\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.088918 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3820e-9164-4f99-a317-95c69aa4df0e-config\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.189893 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3820e-9164-4f99-a317-95c69aa4df0e-config\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.189967 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts7pm\" (UniqueName: \"kubernetes.io/projected/bfe3820e-9164-4f99-a317-95c69aa4df0e-kube-api-access-ts7pm\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.190012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfe3820e-9164-4f99-a317-95c69aa4df0e-serving-cert\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.190029 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfe3820e-9164-4f99-a317-95c69aa4df0e-client-ca\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.190987 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfe3820e-9164-4f99-a317-95c69aa4df0e-client-ca\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " 
pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.191206 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3820e-9164-4f99-a317-95c69aa4df0e-config\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.199032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfe3820e-9164-4f99-a317-95c69aa4df0e-serving-cert\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.209558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts7pm\" (UniqueName: \"kubernetes.io/projected/bfe3820e-9164-4f99-a317-95c69aa4df0e-kube-api-access-ts7pm\") pod \"route-controller-manager-86cc6575b9-cgt86\" (UID: \"bfe3820e-9164-4f99-a317-95c69aa4df0e\") " pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.238040 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.633689 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86"] Mar 12 16:07:16 crc kubenswrapper[4687]: W0312 16:07:16.645068 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfe3820e_9164_4f99_a317_95c69aa4df0e.slice/crio-4b44b2a517ee93e16524f7cda902a199c865a641aacd4bda7d6f2feca3402e67 WatchSource:0}: Error finding container 4b44b2a517ee93e16524f7cda902a199c865a641aacd4bda7d6f2feca3402e67: Status 404 returned error can't find the container with id 4b44b2a517ee93e16524f7cda902a199c865a641aacd4bda7d6f2feca3402e67 Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.990825 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" event={"ID":"bfe3820e-9164-4f99-a317-95c69aa4df0e","Type":"ContainerStarted","Data":"02dca7bee1b1a57992bccbd0da69a142a5d2da803b27e7a82ac9fc707a34b766"} Mar 12 16:07:16 crc kubenswrapper[4687]: I0312 16:07:16.990876 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" event={"ID":"bfe3820e-9164-4f99-a317-95c69aa4df0e","Type":"ContainerStarted","Data":"4b44b2a517ee93e16524f7cda902a199c865a641aacd4bda7d6f2feca3402e67"} Mar 12 16:07:17 crc kubenswrapper[4687]: I0312 16:07:17.011591 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podStartSLOduration=4.011575328 podStartE2EDuration="4.011575328s" podCreationTimestamp="2026-03-12 16:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:07:17.009096121 +0000 UTC 
m=+285.973058465" watchObservedRunningTime="2026-03-12 16:07:17.011575328 +0000 UTC m=+285.975537672" Mar 12 16:07:17 crc kubenswrapper[4687]: I0312 16:07:17.998676 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:18 crc kubenswrapper[4687]: I0312 16:07:18.007998 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.035456 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fdbl6"] Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.037428 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.070186 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fdbl6"] Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221232 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-registry-certificates\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221470 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-bound-sa-token\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221556 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-registry-tls\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221633 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5n8\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-kube-api-access-jd5n8\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-trusted-ca\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.221877 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.250332 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.323196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-bound-sa-token\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.323278 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-registry-tls\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.323323 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd5n8\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-kube-api-access-jd5n8\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.323412 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-trusted-ca\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.324822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.324874 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.324852 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-trusted-ca\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.325000 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.325157 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-registry-certificates\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.327488 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-registry-certificates\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.330020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.331005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-registry-tls\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.348045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd5n8\" (UniqueName: \"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-kube-api-access-jd5n8\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.350088 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c-bound-sa-token\") pod \"image-registry-66df7c8f76-fdbl6\" (UID: \"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.367606 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:48 crc kubenswrapper[4687]: I0312 16:07:48.849523 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fdbl6"] Mar 12 16:07:49 crc kubenswrapper[4687]: I0312 16:07:49.200981 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" event={"ID":"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c","Type":"ContainerStarted","Data":"d3d07750eea79092ee8ff14f2f023b200f2fd9c2abd297a590a3a0841dc07060"} Mar 12 16:07:49 crc kubenswrapper[4687]: I0312 16:07:49.202446 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:07:49 crc kubenswrapper[4687]: I0312 16:07:49.202471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" event={"ID":"9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c","Type":"ContainerStarted","Data":"4e1fffd863aa4880dd057e77ff405fe749207e2d481984dbd6ba04ddc3038a85"} Mar 12 16:07:49 crc kubenswrapper[4687]: I0312 16:07:49.228072 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" podStartSLOduration=1.228055096 podStartE2EDuration="1.228055096s" podCreationTimestamp="2026-03-12 16:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:07:49.221596193 +0000 UTC m=+318.185558557" watchObservedRunningTime="2026-03-12 16:07:49.228055096 +0000 UTC m=+318.192017440" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.145278 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555528-7r7ld"] Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.146497 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555528-7r7ld" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.149775 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.150017 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.150305 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.160883 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555528-7r7ld"] Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.299751 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6j2v\" (UniqueName: \"kubernetes.io/projected/08ee4eba-7b39-4d7e-acf0-b5dad82088d2-kube-api-access-g6j2v\") pod \"auto-csr-approver-29555528-7r7ld\" (UID: \"08ee4eba-7b39-4d7e-acf0-b5dad82088d2\") " pod="openshift-infra/auto-csr-approver-29555528-7r7ld" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.400814 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6j2v\" (UniqueName: \"kubernetes.io/projected/08ee4eba-7b39-4d7e-acf0-b5dad82088d2-kube-api-access-g6j2v\") pod \"auto-csr-approver-29555528-7r7ld\" (UID: \"08ee4eba-7b39-4d7e-acf0-b5dad82088d2\") " pod="openshift-infra/auto-csr-approver-29555528-7r7ld" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.427276 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6j2v\" (UniqueName: \"kubernetes.io/projected/08ee4eba-7b39-4d7e-acf0-b5dad82088d2-kube-api-access-g6j2v\") pod \"auto-csr-approver-29555528-7r7ld\" (UID: \"08ee4eba-7b39-4d7e-acf0-b5dad82088d2\") " pod="openshift-infra/auto-csr-approver-29555528-7r7ld" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.481953 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555528-7r7ld" Mar 12 16:08:00 crc kubenswrapper[4687]: I0312 16:08:00.989687 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555528-7r7ld"] Mar 12 16:08:01 crc kubenswrapper[4687]: I0312 16:08:01.292454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555528-7r7ld" event={"ID":"08ee4eba-7b39-4d7e-acf0-b5dad82088d2","Type":"ContainerStarted","Data":"0e3821e7e3adc5a285a01b9e48736a7dad4ad66d34423d553cb483c471cbca7f"} Mar 12 16:08:03 crc kubenswrapper[4687]: I0312 16:08:03.306322 4687 generic.go:334] "Generic (PLEG): container finished" podID="08ee4eba-7b39-4d7e-acf0-b5dad82088d2" containerID="e333aa997746b252319becf7b8923d4f519e3e34aa40e34198848218d38a9fd8" exitCode=0 Mar 12 16:08:03 crc kubenswrapper[4687]: I0312 16:08:03.306445 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555528-7r7ld" event={"ID":"08ee4eba-7b39-4d7e-acf0-b5dad82088d2","Type":"ContainerDied","Data":"e333aa997746b252319becf7b8923d4f519e3e34aa40e34198848218d38a9fd8"} Mar 12 16:08:04 crc kubenswrapper[4687]: I0312 16:08:04.618811 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555528-7r7ld" Mar 12 16:08:04 crc kubenswrapper[4687]: I0312 16:08:04.665635 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6j2v\" (UniqueName: \"kubernetes.io/projected/08ee4eba-7b39-4d7e-acf0-b5dad82088d2-kube-api-access-g6j2v\") pod \"08ee4eba-7b39-4d7e-acf0-b5dad82088d2\" (UID: \"08ee4eba-7b39-4d7e-acf0-b5dad82088d2\") " Mar 12 16:08:04 crc kubenswrapper[4687]: I0312 16:08:04.671576 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ee4eba-7b39-4d7e-acf0-b5dad82088d2-kube-api-access-g6j2v" (OuterVolumeSpecName: "kube-api-access-g6j2v") pod "08ee4eba-7b39-4d7e-acf0-b5dad82088d2" (UID: "08ee4eba-7b39-4d7e-acf0-b5dad82088d2"). InnerVolumeSpecName "kube-api-access-g6j2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:04 crc kubenswrapper[4687]: I0312 16:08:04.768660 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6j2v\" (UniqueName: \"kubernetes.io/projected/08ee4eba-7b39-4d7e-acf0-b5dad82088d2-kube-api-access-g6j2v\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:05 crc kubenswrapper[4687]: I0312 16:08:05.331128 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555528-7r7ld" event={"ID":"08ee4eba-7b39-4d7e-acf0-b5dad82088d2","Type":"ContainerDied","Data":"0e3821e7e3adc5a285a01b9e48736a7dad4ad66d34423d553cb483c471cbca7f"} Mar 12 16:08:05 crc kubenswrapper[4687]: I0312 16:08:05.331189 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e3821e7e3adc5a285a01b9e48736a7dad4ad66d34423d553cb483c471cbca7f" Mar 12 16:08:05 crc kubenswrapper[4687]: I0312 16:08:05.331231 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555528-7r7ld" Mar 12 16:08:08 crc kubenswrapper[4687]: I0312 16:08:08.375040 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" Mar 12 16:08:08 crc kubenswrapper[4687]: I0312 16:08:08.458349 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kplxq"] Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.308813 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdstv"] Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.311078 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rdstv" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="registry-server" containerID="cri-o://d54bfc3986b0a496eaab7aa8aece543c66ae5b1b3648f7204a4fe9cc76eae88b" gracePeriod=30 Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.319580 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5vb4"] Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.319822 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q5vb4" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="registry-server" containerID="cri-o://3f64d7c163d43b6c3655c1c3f71905d9ac8438aee67f768cde297fd713b8bdf2" gracePeriod=30 Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.338873 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lmlgb"] Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.339285 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" podUID="0cf80ba9-ad15-48b8-ae26-f734183c8d30" containerName="marketplace-operator" containerID="cri-o://fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8" gracePeriod=30 Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.352815 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzpsd"] Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.353107 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vzpsd" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="registry-server" containerID="cri-o://7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106" gracePeriod=30 Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.364700 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8z8d"] Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.364965 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8z8d" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="registry-server" containerID="cri-o://3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea" gracePeriod=30 Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.372429 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kbtm9"] Mar 12 16:08:22 crc kubenswrapper[4687]: E0312 16:08:22.372684 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="08ee4eba-7b39-4d7e-acf0-b5dad82088d2" containerName="oc" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.372704 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ee4eba-7b39-4d7e-acf0-b5dad82088d2" containerName="oc" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.372842 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ee4eba-7b39-4d7e-acf0-b5dad82088d2" containerName="oc" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.373388 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.384107 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kbtm9"] Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.420300 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1680cce-a286-460f-9e3f-145d9b364995-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.420402 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a1680cce-a286-460f-9e3f-145d9b364995-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.420430 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qns5v\" (UniqueName: \"kubernetes.io/projected/a1680cce-a286-460f-9e3f-145d9b364995-kube-api-access-qns5v\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.460176 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerID="3f64d7c163d43b6c3655c1c3f71905d9ac8438aee67f768cde297fd713b8bdf2" exitCode=0 Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.460331 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vb4" event={"ID":"b0ddda34-ec8b-46b6-9b04-dcd34d30177c","Type":"ContainerDied","Data":"3f64d7c163d43b6c3655c1c3f71905d9ac8438aee67f768cde297fd713b8bdf2"} Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.462318 4687 generic.go:334] "Generic (PLEG): container finished" podID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerID="d54bfc3986b0a496eaab7aa8aece543c66ae5b1b3648f7204a4fe9cc76eae88b" exitCode=0 Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.462370 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstv" event={"ID":"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06","Type":"ContainerDied","Data":"d54bfc3986b0a496eaab7aa8aece543c66ae5b1b3648f7204a4fe9cc76eae88b"} Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.521208 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qns5v\" (UniqueName: 
\"kubernetes.io/projected/a1680cce-a286-460f-9e3f-145d9b364995-kube-api-access-qns5v\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.521283 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1680cce-a286-460f-9e3f-145d9b364995-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.521393 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a1680cce-a286-460f-9e3f-145d9b364995-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.523407 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1680cce-a286-460f-9e3f-145d9b364995-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.527687 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a1680cce-a286-460f-9e3f-145d9b364995-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.541197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qns5v\" (UniqueName: \"kubernetes.io/projected/a1680cce-a286-460f-9e3f-145d9b364995-kube-api-access-qns5v\") pod \"marketplace-operator-79b997595-kbtm9\" (UID: \"a1680cce-a286-460f-9e3f-145d9b364995\") " pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.753461 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.762111 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.826620 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-catalog-content\") pod \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.826685 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-utilities\") pod \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.826733 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtr5w\" (UniqueName: \"kubernetes.io/projected/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-kube-api-access-gtr5w\") pod \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\" (UID: \"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06\") " Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.829277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-utilities" (OuterVolumeSpecName: "utilities") pod "c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" (UID: "c19b60e6-9340-4f6a-a9fc-5a804e5d5a06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.832587 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-kube-api-access-gtr5w" (OuterVolumeSpecName: "kube-api-access-gtr5w") pod "c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" (UID: "c19b60e6-9340-4f6a-a9fc-5a804e5d5a06"). InnerVolumeSpecName "kube-api-access-gtr5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.883221 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.896011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" (UID: "c19b60e6-9340-4f6a-a9fc-5a804e5d5a06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.908845 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.912392 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.931328 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-catalog-content\") pod \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.933544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-utilities\") pod \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.933581 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mls\" (UniqueName: \"kubernetes.io/projected/1e9d2fa4-739e-470f-95c2-0e547a7e147e-kube-api-access-r4mls\") pod \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\" (UID: \"1e9d2fa4-739e-470f-95c2-0e547a7e147e\") " Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.933863 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtr5w\" (UniqueName: \"kubernetes.io/projected/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-kube-api-access-gtr5w\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.933876 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.933889 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.934844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-utilities" (OuterVolumeSpecName: "utilities") pod "1e9d2fa4-739e-470f-95c2-0e547a7e147e" (UID: "1e9d2fa4-739e-470f-95c2-0e547a7e147e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.936823 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.939304 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9d2fa4-739e-470f-95c2-0e547a7e147e-kube-api-access-r4mls" (OuterVolumeSpecName: "kube-api-access-r4mls") pod "1e9d2fa4-739e-470f-95c2-0e547a7e147e" (UID: "1e9d2fa4-739e-470f-95c2-0e547a7e147e"). InnerVolumeSpecName "kube-api-access-r4mls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:22 crc kubenswrapper[4687]: I0312 16:08:22.958128 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e9d2fa4-739e-470f-95c2-0e547a7e147e" (UID: "1e9d2fa4-739e-470f-95c2-0e547a7e147e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.034885 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lkjj\" (UniqueName: \"kubernetes.io/projected/0cf80ba9-ad15-48b8-ae26-f734183c8d30-kube-api-access-6lkjj\") pod \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.034950 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-operator-metrics\") pod \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.034981 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-catalog-content\") pod \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035000 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-utilities\") pod \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035035 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpxj9\" (UniqueName: \"kubernetes.io/projected/09abed8f-f9fb-41fd-b864-daf0e5026ae5-kube-api-access-wpxj9\") pod \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035089 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjcpd\" (UniqueName: \"kubernetes.io/projected/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-kube-api-access-jjcpd\") pod \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\" (UID: \"b0ddda34-ec8b-46b6-9b04-dcd34d30177c\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035104 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-catalog-content\") pod \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035124 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-utilities\") pod \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\" (UID: \"09abed8f-f9fb-41fd-b864-daf0e5026ae5\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035165 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-trusted-ca\") pod \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\" (UID: \"0cf80ba9-ad15-48b8-ae26-f734183c8d30\") " Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035349 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-catalog-content\") on node 
\"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035373 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mls\" (UniqueName: \"kubernetes.io/projected/1e9d2fa4-739e-470f-95c2-0e547a7e147e-kube-api-access-r4mls\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.035384 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9d2fa4-739e-470f-95c2-0e547a7e147e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.036079 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0cf80ba9-ad15-48b8-ae26-f734183c8d30" (UID: "0cf80ba9-ad15-48b8-ae26-f734183c8d30"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.036509 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-utilities" (OuterVolumeSpecName: "utilities") pod "b0ddda34-ec8b-46b6-9b04-dcd34d30177c" (UID: "b0ddda34-ec8b-46b6-9b04-dcd34d30177c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.036987 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-utilities" (OuterVolumeSpecName: "utilities") pod "09abed8f-f9fb-41fd-b864-daf0e5026ae5" (UID: "09abed8f-f9fb-41fd-b864-daf0e5026ae5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.038473 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-kube-api-access-jjcpd" (OuterVolumeSpecName: "kube-api-access-jjcpd") pod "b0ddda34-ec8b-46b6-9b04-dcd34d30177c" (UID: "b0ddda34-ec8b-46b6-9b04-dcd34d30177c"). InnerVolumeSpecName "kube-api-access-jjcpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.038542 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09abed8f-f9fb-41fd-b864-daf0e5026ae5-kube-api-access-wpxj9" (OuterVolumeSpecName: "kube-api-access-wpxj9") pod "09abed8f-f9fb-41fd-b864-daf0e5026ae5" (UID: "09abed8f-f9fb-41fd-b864-daf0e5026ae5"). InnerVolumeSpecName "kube-api-access-wpxj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.038969 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0cf80ba9-ad15-48b8-ae26-f734183c8d30" (UID: "0cf80ba9-ad15-48b8-ae26-f734183c8d30"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.039220 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf80ba9-ad15-48b8-ae26-f734183c8d30-kube-api-access-6lkjj" (OuterVolumeSpecName: "kube-api-access-6lkjj") pod "0cf80ba9-ad15-48b8-ae26-f734183c8d30" (UID: "0cf80ba9-ad15-48b8-ae26-f734183c8d30"). InnerVolumeSpecName "kube-api-access-6lkjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.089237 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0ddda34-ec8b-46b6-9b04-dcd34d30177c" (UID: "b0ddda34-ec8b-46b6-9b04-dcd34d30177c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136431 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjcpd\" (UniqueName: \"kubernetes.io/projected/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-kube-api-access-jjcpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136469 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136485 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136493 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lkjj\" (UniqueName: \"kubernetes.io/projected/0cf80ba9-ad15-48b8-ae26-f734183c8d30-kube-api-access-6lkjj\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136503 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cf80ba9-ad15-48b8-ae26-f734183c8d30-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136517 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136536 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ddda34-ec8b-46b6-9b04-dcd34d30177c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.136549 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpxj9\" (UniqueName: \"kubernetes.io/projected/09abed8f-f9fb-41fd-b864-daf0e5026ae5-kube-api-access-wpxj9\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.155050 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09abed8f-f9fb-41fd-b864-daf0e5026ae5" (UID: "09abed8f-f9fb-41fd-b864-daf0e5026ae5"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.228911 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kbtm9"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.237307 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09abed8f-f9fb-41fd-b864-daf0e5026ae5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:23 crc kubenswrapper[4687]: W0312 16:08:23.237507 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1680cce_a286_460f_9e3f_145d9b364995.slice/crio-be52fc9d3e6111b18c0350b3f6ac6dae5987f311253095954536dcfe559808be WatchSource:0}: Error finding container be52fc9d3e6111b18c0350b3f6ac6dae5987f311253095954536dcfe559808be: Status 404 returned error can't find the container with id be52fc9d3e6111b18c0350b3f6ac6dae5987f311253095954536dcfe559808be Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.468124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" event={"ID":"a1680cce-a286-460f-9e3f-145d9b364995","Type":"ContainerStarted","Data":"2194eae52541904551456546389f27d22048c7a40614c3e06c18fe979e6a2432"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.468164 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" event={"ID":"a1680cce-a286-460f-9e3f-145d9b364995","Type":"ContainerStarted","Data":"be52fc9d3e6111b18c0350b3f6ac6dae5987f311253095954536dcfe559808be"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.468346 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.469508 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": dial tcp 10.217.0.78:8080: connect: connection refused" start-of-body= Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.469891 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": dial tcp 10.217.0.78:8080: connect: connection refused" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.471679 4687 generic.go:334] "Generic (PLEG): container finished" podID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerID="7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106" exitCode=0 Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.471740 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzpsd" event={"ID":"1e9d2fa4-739e-470f-95c2-0e547a7e147e","Type":"ContainerDied","Data":"7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.471765 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzpsd" 
event={"ID":"1e9d2fa4-739e-470f-95c2-0e547a7e147e","Type":"ContainerDied","Data":"d0784a137f4466891cb57079ab18d4877139f21ab8db7784f25010d34832391d"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.471783 4687 scope.go:117] "RemoveContainer" containerID="7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.471796 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzpsd" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.476779 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vb4" event={"ID":"b0ddda34-ec8b-46b6-9b04-dcd34d30177c","Type":"ContainerDied","Data":"08e9fdbbd71050e3bbd745ebe079ce2d568bf1029ff5d7a99d48e7e8e257cfb4"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.476811 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vb4" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.479930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstv" event={"ID":"c19b60e6-9340-4f6a-a9fc-5a804e5d5a06","Type":"ContainerDied","Data":"23f0513a44f0904283d2e4b9f520503be4aa10ffe2263b249fdd0ec4b6a787df"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.479946 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdstv" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.481230 4687 generic.go:334] "Generic (PLEG): container finished" podID="0cf80ba9-ad15-48b8-ae26-f734183c8d30" containerID="fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8" exitCode=0 Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.481264 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" event={"ID":"0cf80ba9-ad15-48b8-ae26-f734183c8d30","Type":"ContainerDied","Data":"fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.481285 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" event={"ID":"0cf80ba9-ad15-48b8-ae26-f734183c8d30","Type":"ContainerDied","Data":"f3864b23019c544db1c0e9f3d09634a63335e664770d5941b5e212e6090701e2"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.481253 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lmlgb" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.483004 4687 generic.go:334] "Generic (PLEG): container finished" podID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerID="3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea" exitCode=0 Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.483030 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8z8d" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.483039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8z8d" event={"ID":"09abed8f-f9fb-41fd-b864-daf0e5026ae5","Type":"ContainerDied","Data":"3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.483175 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8z8d" event={"ID":"09abed8f-f9fb-41fd-b864-daf0e5026ae5","Type":"ContainerDied","Data":"5c46d05ce32d24b94184055db1aadf3ca5f656b63bd8b7e10b42a719a8cde47a"} Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.490609 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podStartSLOduration=1.490586162 podStartE2EDuration="1.490586162s" podCreationTimestamp="2026-03-12 16:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:08:23.482527145 +0000 UTC m=+352.446489489" watchObservedRunningTime="2026-03-12 16:08:23.490586162 +0000 UTC m=+352.454548506" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.493087 4687 scope.go:117] "RemoveContainer" containerID="22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.520561 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzpsd"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.522632 4687 scope.go:117] "RemoveContainer" containerID="4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.526289 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzpsd"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.546544 4687 scope.go:117] "RemoveContainer" containerID="7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106" Mar 12 16:08:23 crc kubenswrapper[4687]: E0312 16:08:23.546886 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106\": container with ID starting with 7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106 not found: ID does not exist" containerID="7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.546920 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106"} err="failed to get container status \"7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106\": rpc error: code = NotFound desc = could not find container \"7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106\": container with ID starting with 7a718af660fa4e478c65e072950e71a0ef268f08df5dbaec651e112af1be1106 not found: ID does not exist" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.546939 4687 scope.go:117] "RemoveContainer" containerID="22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1" Mar 12 16:08:23 crc kubenswrapper[4687]: E0312 16:08:23.547261 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1\": container with ID starting with 22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1 not found: ID does not exist" containerID="22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.547298 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1"} err="failed to get container status \"22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1\": rpc error: code = NotFound desc = could not find container \"22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1\": container with ID starting with 22148df37c951b6fb2c73b45a294f0637f5d586d61e3806ec07608b0a8ef8ad1 not found: ID does not exist" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.547320 4687 scope.go:117] "RemoveContainer" containerID="4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9" Mar 12 16:08:23 crc kubenswrapper[4687]: E0312 16:08:23.549172 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9\": container with ID starting with 4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9 not found: ID does not exist" containerID="4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.549197 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9"} err="failed to get container status \"4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9\": rpc error: code = NotFound desc = could not find container \"4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9\": container with ID starting with 4c917a2df28576c2c5981e21566fe5ab67ec2940bb7e5e6ca89f239edf34fad9 not found: ID does not exist" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.549212 4687 scope.go:117] "RemoveContainer" containerID="3f64d7c163d43b6c3655c1c3f71905d9ac8438aee67f768cde297fd713b8bdf2" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.550144 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8z8d"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.558834 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t8z8d"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.568450 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdstv"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.584306 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rdstv"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.584409 4687 scope.go:117] "RemoveContainer" containerID="95b4626731f5ce90062aa36f6090b63df229d141f2280ae7f65d8a7b20e4acd3" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.592263 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5vb4"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.596844 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q5vb4"] Mar 
12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.601169 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lmlgb"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.602732 4687 scope.go:117] "RemoveContainer" containerID="9a692017b620abd6c55cbe7b4259a3ae58cb16eb354a3cefc13fc74864c34af2" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.605606 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lmlgb"] Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.615005 4687 scope.go:117] "RemoveContainer" containerID="d54bfc3986b0a496eaab7aa8aece543c66ae5b1b3648f7204a4fe9cc76eae88b" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.634284 4687 scope.go:117] "RemoveContainer" containerID="de37ba5145ab568517dd5647d7a8f60e5cafd7bce7f2d93edb3fa5d4625c9661" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.652492 4687 scope.go:117] "RemoveContainer" containerID="d13e2fb21ec3b7b35a2a52217fef55a6dd8879b69574fef901aa70b04a671191" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.663757 4687 scope.go:117] "RemoveContainer" containerID="fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.690763 4687 scope.go:117] "RemoveContainer" containerID="fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8" Mar 12 16:08:23 crc kubenswrapper[4687]: E0312 16:08:23.691413 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8\": container with ID starting with fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8 not found: ID does not exist" containerID="fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.691447 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8"} err="failed to get container status \"fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8\": rpc error: code = NotFound desc = could not find container \"fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8\": container with ID starting with fe2f70c873283c17e63175019a5036e9d7caf64f8a37cd2f444e765a855247e8 not found: ID does not exist" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.691471 4687 scope.go:117] "RemoveContainer" containerID="3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.707338 4687 scope.go:117] "RemoveContainer" containerID="afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.728601 4687 scope.go:117] "RemoveContainer" containerID="7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.740884 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" path="/var/lib/kubelet/pods/09abed8f-f9fb-41fd-b864-daf0e5026ae5/volumes" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.741671 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf80ba9-ad15-48b8-ae26-f734183c8d30" path="/var/lib/kubelet/pods/0cf80ba9-ad15-48b8-ae26-f734183c8d30/volumes" Mar 12 16:08:23 crc 
kubenswrapper[4687]: I0312 16:08:23.742216 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" path="/var/lib/kubelet/pods/1e9d2fa4-739e-470f-95c2-0e547a7e147e/volumes" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.743418 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" path="/var/lib/kubelet/pods/b0ddda34-ec8b-46b6-9b04-dcd34d30177c/volumes" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.743468 4687 scope.go:117] "RemoveContainer" containerID="3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea" Mar 12 16:08:23 crc kubenswrapper[4687]: E0312 16:08:23.743736 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea\": container with ID starting with 3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea not found: ID does not exist" containerID="3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.743776 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea"} err="failed to get container status \"3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea\": rpc error: code = NotFound desc = could not find container \"3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea\": container with ID starting with 3844a1e9d3b351c7cf02c867b1dfe4557df7c4027c4cc43147bc525894d27fea not found: ID does not exist" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.743801 4687 scope.go:117] "RemoveContainer" containerID="afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1" Mar 12 16:08:23 crc kubenswrapper[4687]: E0312 16:08:23.744167 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1\": container with ID starting with afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1 not found: ID does not exist" containerID="afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.744191 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1"} err="failed to get container status \"afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1\": rpc error: code = NotFound desc = could not find container \"afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1\": container with ID starting with afa997884f204341862a5c2b356fa469613357472555e7d5138b3496509c97e1 not found: ID does not exist" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.744205 4687 scope.go:117] "RemoveContainer" containerID="7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.744244 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" path="/var/lib/kubelet/pods/c19b60e6-9340-4f6a-a9fc-5a804e5d5a06/volumes" Mar 12 16:08:23 crc kubenswrapper[4687]: E0312 16:08:23.744519 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9\": container with ID starting with 7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9 not found: ID does not exist" containerID="7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9" Mar 12 16:08:23 crc kubenswrapper[4687]: I0312 16:08:23.744541 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9"} err="failed to get container status \"7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9\": rpc error: code = NotFound desc = could not find container \"7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9\": container with ID starting with 7b0647c4ddc2bb43634ae4b0f0c57759019af5a57b87daedabf8cf8788f812a9 not found: ID does not exist" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.498314 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529635 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g9hdt"] Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529856 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529870 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529886 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529895 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529903 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529913 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529922 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529929 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529942 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529948 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529955 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529960 4687 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529971 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529977 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529984 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf80ba9-ad15-48b8-ae26-f734183c8d30" containerName="marketplace-operator" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.529990 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf80ba9-ad15-48b8-ae26-f734183c8d30" containerName="marketplace-operator" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.529999 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530005 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.530013 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530018 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.530025 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530031 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.530039 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530044 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="extract-utilities" Mar 12 16:08:24 crc kubenswrapper[4687]: E0312 16:08:24.530051 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530056 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="extract-content" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530150 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ddda34-ec8b-46b6-9b04-dcd34d30177c" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530161 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19b60e6-9340-4f6a-a9fc-5a804e5d5a06" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530170 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="09abed8f-f9fb-41fd-b864-daf0e5026ae5" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 
16:08:24.530178 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf80ba9-ad15-48b8-ae26-f734183c8d30" containerName="marketplace-operator" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530188 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e9d2fa4-739e-470f-95c2-0e547a7e147e" containerName="registry-server" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.530904 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.533702 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.540668 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9hdt"] Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.659585 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-catalog-content\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.659630 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-utilities\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.659670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6kkv\" (UniqueName: \"kubernetes.io/projected/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-kube-api-access-q6kkv\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.729155 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-22nvv"] Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.730070 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.731845 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.739119 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22nvv"] Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.760766 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-catalog-content\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.760809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-utilities\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.760853 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6kkv\" (UniqueName: \"kubernetes.io/projected/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-kube-api-access-q6kkv\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.761482 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-catalog-content\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.761544 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-utilities\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.777957 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6kkv\" (UniqueName: \"kubernetes.io/projected/50316ae5-82e3-4dbc-ba50-dd2046abc0e1-kube-api-access-q6kkv\") pod \"redhat-marketplace-g9hdt\" (UID: \"50316ae5-82e3-4dbc-ba50-dd2046abc0e1\") " pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.858743 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.862481 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-utilities\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.862546 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-catalog-content\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.862639 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw2xx\" (UniqueName: \"kubernetes.io/projected/6c4b07aa-c412-4479-960e-1f79f4e68d96-kube-api-access-kw2xx\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.963576 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw2xx\" (UniqueName: \"kubernetes.io/projected/6c4b07aa-c412-4479-960e-1f79f4e68d96-kube-api-access-kw2xx\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.963660 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-utilities\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.963692 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-catalog-content\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.964185 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-catalog-content\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.964240 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-utilities\") pod \"redhat-operators-22nvv\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:24 crc kubenswrapper[4687]: I0312 16:08:24.980628 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw2xx\" (UniqueName: \"kubernetes.io/projected/6c4b07aa-c412-4479-960e-1f79f4e68d96-kube-api-access-kw2xx\") pod \"redhat-operators-22nvv\" (UID: 
\"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:25 crc kubenswrapper[4687]: I0312 16:08:25.048178 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:25 crc kubenswrapper[4687]: I0312 16:08:25.285821 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9hdt"] Mar 12 16:08:25 crc kubenswrapper[4687]: W0312 16:08:25.289793 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50316ae5_82e3_4dbc_ba50_dd2046abc0e1.slice/crio-e87fd8d393d12c87990fb63c97095c5a7a7fe31f066dcaf5619eaa7a1af5077b WatchSource:0}: Error finding container e87fd8d393d12c87990fb63c97095c5a7a7fe31f066dcaf5619eaa7a1af5077b: Status 404 returned error can't find the container with id e87fd8d393d12c87990fb63c97095c5a7a7fe31f066dcaf5619eaa7a1af5077b Mar 12 16:08:25 crc kubenswrapper[4687]: I0312 16:08:25.418859 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-22nvv"] Mar 12 16:08:25 crc kubenswrapper[4687]: W0312 16:08:25.497935 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4b07aa_c412_4479_960e_1f79f4e68d96.slice/crio-e10c551c83fb6e3e38c69577d9af8929cad945afd4988ed20daee6b23492371d WatchSource:0}: Error finding container e10c551c83fb6e3e38c69577d9af8929cad945afd4988ed20daee6b23492371d: Status 404 returned error can't find the container with id e10c551c83fb6e3e38c69577d9af8929cad945afd4988ed20daee6b23492371d Mar 12 16:08:25 crc kubenswrapper[4687]: I0312 16:08:25.501410 4687 generic.go:334] "Generic (PLEG): container finished" podID="50316ae5-82e3-4dbc-ba50-dd2046abc0e1" containerID="aec822170fe6cd7e14847e70d1cd764a62f1083e2348b92c68699cb87b21919a" exitCode=0 Mar 12 16:08:25 crc kubenswrapper[4687]: I0312 16:08:25.501514 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9hdt" event={"ID":"50316ae5-82e3-4dbc-ba50-dd2046abc0e1","Type":"ContainerDied","Data":"aec822170fe6cd7e14847e70d1cd764a62f1083e2348b92c68699cb87b21919a"} Mar 12 16:08:25 crc kubenswrapper[4687]: I0312 16:08:25.501540 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9hdt" event={"ID":"50316ae5-82e3-4dbc-ba50-dd2046abc0e1","Type":"ContainerStarted","Data":"e87fd8d393d12c87990fb63c97095c5a7a7fe31f066dcaf5619eaa7a1af5077b"} Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.510629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9hdt" event={"ID":"50316ae5-82e3-4dbc-ba50-dd2046abc0e1","Type":"ContainerStarted","Data":"364d5c85f55a3666a3b6c21609514daee07436c7a7dff0887a3a3ef57fc5a414"} Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.512766 4687 generic.go:334] "Generic (PLEG): container finished" podID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerID="acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da" exitCode=0 Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.512795 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22nvv" event={"ID":"6c4b07aa-c412-4479-960e-1f79f4e68d96","Type":"ContainerDied","Data":"acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da"} Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 
16:08:26.512811 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22nvv" event={"ID":"6c4b07aa-c412-4479-960e-1f79f4e68d96","Type":"ContainerStarted","Data":"e10c551c83fb6e3e38c69577d9af8929cad945afd4988ed20daee6b23492371d"} Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.932171 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lmq5m"] Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.933474 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.936885 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.947281 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmq5m"] Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.991464 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-utilities\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.991531 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpdcv\" (UniqueName: \"kubernetes.io/projected/268cba67-b44a-4d37-8567-475c1fd27a20-kube-api-access-cpdcv\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:26 crc kubenswrapper[4687]: I0312 16:08:26.991610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-catalog-content\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.092659 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpdcv\" (UniqueName: \"kubernetes.io/projected/268cba67-b44a-4d37-8567-475c1fd27a20-kube-api-access-cpdcv\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.092737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-utilities\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.092789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-catalog-content\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.093418 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-utilities\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.093822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-catalog-content\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.127561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpdcv\" (UniqueName: \"kubernetes.io/projected/268cba67-b44a-4d37-8567-475c1fd27a20-kube-api-access-cpdcv\") pod \"community-operators-lmq5m\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.133769 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z2wpn"] Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.196488 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2wpn"] Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.196647 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.198897 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.257009 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.297255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-catalog-content\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.297642 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-utilities\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:27 crc kubenswrapper[4687]: I0312 16:08:27.297714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shznq\" (UniqueName: \"kubernetes.io/projected/dea71408-c307-4443-a026-547ca7196ff6-kube-api-access-shznq\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.399052 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shznq\" (UniqueName: \"kubernetes.io/projected/dea71408-c307-4443-a026-547ca7196ff6-kube-api-access-shznq\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.399185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-catalog-content\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.399330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-utilities\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.399829 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-catalog-content\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.400004 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-utilities\") pod \"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.419788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shznq\" (UniqueName: \"kubernetes.io/projected/dea71408-c307-4443-a026-547ca7196ff6-kube-api-access-shznq\") pod 
\"certified-operators-z2wpn\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.515906 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.520993 4687 generic.go:334] "Generic (PLEG): container finished" podID="50316ae5-82e3-4dbc-ba50-dd2046abc0e1" containerID="364d5c85f55a3666a3b6c21609514daee07436c7a7dff0887a3a3ef57fc5a414" exitCode=0 Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:27.521047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9hdt" event={"ID":"50316ae5-82e3-4dbc-ba50-dd2046abc0e1","Type":"ContainerDied","Data":"364d5c85f55a3666a3b6c21609514daee07436c7a7dff0887a3a3ef57fc5a414"} Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.400318 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2wpn"] Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.404164 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmq5m"] Mar 12 16:08:28 crc kubenswrapper[4687]: W0312 16:08:28.412476 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddea71408_c307_4443_a026_547ca7196ff6.slice/crio-d08150890bd4836876b824f0f41ca96207105c8ffd8201a28093b4bdf88ba23b WatchSource:0}: Error finding container d08150890bd4836876b824f0f41ca96207105c8ffd8201a28093b4bdf88ba23b: Status 404 returned error can't find the container with id d08150890bd4836876b824f0f41ca96207105c8ffd8201a28093b4bdf88ba23b Mar 12 16:08:28 crc kubenswrapper[4687]: W0312 16:08:28.413628 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod268cba67_b44a_4d37_8567_475c1fd27a20.slice/crio-87951ebbfc6dd23f06ac654c4cda3fafc100b1ac5dac0f638204f7131e9b5114 WatchSource:0}: Error finding container 87951ebbfc6dd23f06ac654c4cda3fafc100b1ac5dac0f638204f7131e9b5114: Status 404 returned error can't find the container with id 87951ebbfc6dd23f06ac654c4cda3fafc100b1ac5dac0f638204f7131e9b5114 Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.528435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmq5m" event={"ID":"268cba67-b44a-4d37-8567-475c1fd27a20","Type":"ContainerStarted","Data":"87951ebbfc6dd23f06ac654c4cda3fafc100b1ac5dac0f638204f7131e9b5114"} Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.531593 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9hdt" event={"ID":"50316ae5-82e3-4dbc-ba50-dd2046abc0e1","Type":"ContainerStarted","Data":"5a470bb3f8ffd50d5bd1f58185a87021af7c66890308a59f8b96d8f4ece53e9d"} Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.533636 4687 generic.go:334] "Generic (PLEG): container finished" podID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerID="8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac" exitCode=0 Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.533730 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22nvv" event={"ID":"6c4b07aa-c412-4479-960e-1f79f4e68d96","Type":"ContainerDied","Data":"8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac"} Mar 12 
16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.535172 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wpn" event={"ID":"dea71408-c307-4443-a026-547ca7196ff6","Type":"ContainerStarted","Data":"d08150890bd4836876b824f0f41ca96207105c8ffd8201a28093b4bdf88ba23b"} Mar 12 16:08:28 crc kubenswrapper[4687]: I0312 16:08:28.558813 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g9hdt" podStartSLOduration=2.124377541 podStartE2EDuration="4.558790178s" podCreationTimestamp="2026-03-12 16:08:24 +0000 UTC" firstStartedPulling="2026-03-12 16:08:25.502561069 +0000 UTC m=+354.466523413" lastFinishedPulling="2026-03-12 16:08:27.936973706 +0000 UTC m=+356.900936050" observedRunningTime="2026-03-12 16:08:28.555335254 +0000 UTC m=+357.519297598" watchObservedRunningTime="2026-03-12 16:08:28.558790178 +0000 UTC m=+357.522752532" Mar 12 16:08:29 crc kubenswrapper[4687]: I0312 16:08:29.541744 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22nvv" event={"ID":"6c4b07aa-c412-4479-960e-1f79f4e68d96","Type":"ContainerStarted","Data":"db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93"} Mar 12 16:08:29 crc kubenswrapper[4687]: I0312 16:08:29.543024 4687 generic.go:334] "Generic (PLEG): container finished" podID="dea71408-c307-4443-a026-547ca7196ff6" containerID="e185f136b839faf6ee841bbfe474dd03947d7af81ccf2b622a141029229b9ddf" exitCode=0 Mar 12 16:08:29 crc kubenswrapper[4687]: I0312 16:08:29.543079 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wpn" event={"ID":"dea71408-c307-4443-a026-547ca7196ff6","Type":"ContainerDied","Data":"e185f136b839faf6ee841bbfe474dd03947d7af81ccf2b622a141029229b9ddf"} Mar 12 16:08:29 crc kubenswrapper[4687]: I0312 16:08:29.545219 4687 generic.go:334] "Generic (PLEG): container finished" podID="268cba67-b44a-4d37-8567-475c1fd27a20" containerID="f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144" exitCode=0 Mar 12 16:08:29 crc kubenswrapper[4687]: I0312 16:08:29.546015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmq5m" event={"ID":"268cba67-b44a-4d37-8567-475c1fd27a20","Type":"ContainerDied","Data":"f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144"} Mar 12 16:08:29 crc kubenswrapper[4687]: I0312 16:08:29.562819 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-22nvv" podStartSLOduration=3.082687523 podStartE2EDuration="5.562799848s" podCreationTimestamp="2026-03-12 16:08:24 +0000 UTC" firstStartedPulling="2026-03-12 16:08:26.514712448 +0000 UTC m=+355.478674792" lastFinishedPulling="2026-03-12 16:08:28.994824763 +0000 UTC m=+357.958787117" observedRunningTime="2026-03-12 16:08:29.560820314 +0000 UTC m=+358.524782668" watchObservedRunningTime="2026-03-12 16:08:29.562799848 +0000 UTC m=+358.526762202" Mar 12 16:08:30 crc kubenswrapper[4687]: I0312 16:08:30.553063 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wpn" event={"ID":"dea71408-c307-4443-a026-547ca7196ff6","Type":"ContainerStarted","Data":"2d423b1dfb8802084cbe56a38bfd1bec04bbb3ef7a1a382a73e879837471f9ea"} Mar 12 16:08:30 crc kubenswrapper[4687]: I0312 16:08:30.555334 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmq5m" 
event={"ID":"268cba67-b44a-4d37-8567-475c1fd27a20","Type":"ContainerStarted","Data":"dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee"} Mar 12 16:08:31 crc kubenswrapper[4687]: I0312 16:08:31.569577 4687 generic.go:334] "Generic (PLEG): container finished" podID="dea71408-c307-4443-a026-547ca7196ff6" containerID="2d423b1dfb8802084cbe56a38bfd1bec04bbb3ef7a1a382a73e879837471f9ea" exitCode=0 Mar 12 16:08:31 crc kubenswrapper[4687]: I0312 16:08:31.569962 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wpn" event={"ID":"dea71408-c307-4443-a026-547ca7196ff6","Type":"ContainerDied","Data":"2d423b1dfb8802084cbe56a38bfd1bec04bbb3ef7a1a382a73e879837471f9ea"} Mar 12 16:08:31 crc kubenswrapper[4687]: I0312 16:08:31.577504 4687 generic.go:334] "Generic (PLEG): container finished" podID="268cba67-b44a-4d37-8567-475c1fd27a20" containerID="dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee" exitCode=0 Mar 12 16:08:31 crc kubenswrapper[4687]: I0312 16:08:31.577544 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmq5m" event={"ID":"268cba67-b44a-4d37-8567-475c1fd27a20","Type":"ContainerDied","Data":"dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee"} Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.512083 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" podUID="4907a0ff-69b9-4d86-8d43-b39ff4af8567" containerName="registry" containerID="cri-o://006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c" gracePeriod=30 Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.589540 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wpn" event={"ID":"dea71408-c307-4443-a026-547ca7196ff6","Type":"ContainerStarted","Data":"f1693940ab2b78af019a9bf4c1175535442dd66762f217cbb6b26da157373477"} Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.591760 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmq5m" event={"ID":"268cba67-b44a-4d37-8567-475c1fd27a20","Type":"ContainerStarted","Data":"4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a"} Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.616339 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z2wpn" podStartSLOduration=3.573491157 podStartE2EDuration="6.616325266s" podCreationTimestamp="2026-03-12 16:08:27 +0000 UTC" firstStartedPulling="2026-03-12 16:08:29.544558227 +0000 UTC m=+358.508520571" lastFinishedPulling="2026-03-12 16:08:32.587392336 +0000 UTC m=+361.551354680" observedRunningTime="2026-03-12 16:08:33.612545214 +0000 UTC m=+362.576507548" watchObservedRunningTime="2026-03-12 16:08:33.616325266 +0000 UTC m=+362.580287610" Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.647098 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lmq5m" podStartSLOduration=4.573185418 podStartE2EDuration="7.647079163s" podCreationTimestamp="2026-03-12 16:08:26 +0000 UTC" firstStartedPulling="2026-03-12 16:08:29.546719765 +0000 UTC m=+358.510682109" lastFinishedPulling="2026-03-12 16:08:32.62061351 +0000 UTC m=+361.584575854" observedRunningTime="2026-03-12 16:08:33.642756267 +0000 UTC m=+362.606718631" watchObservedRunningTime="2026-03-12 
16:08:33.647079163 +0000 UTC m=+362.611041497" Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.929838 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.996661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-certificates\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:33 crc kubenswrapper[4687]: I0312 16:08:33.996733 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4907a0ff-69b9-4d86-8d43-b39ff4af8567-ca-trust-extracted\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.996789 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-bound-sa-token\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.996923 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.996950 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-tls\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.996977 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4907a0ff-69b9-4d86-8d43-b39ff4af8567-installation-pull-secrets\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.996993 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wtpg\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-kube-api-access-5wtpg\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.997029 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-trusted-ca\") pod \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\" (UID: \"4907a0ff-69b9-4d86-8d43-b39ff4af8567\") " Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.997455 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:33.997638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.003438 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-kube-api-access-5wtpg" (OuterVolumeSpecName: "kube-api-access-5wtpg") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "kube-api-access-5wtpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.004100 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4907a0ff-69b9-4d86-8d43-b39ff4af8567-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.006966 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.008451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.012550 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4907a0ff-69b9-4d86-8d43-b39ff4af8567-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.014736 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4907a0ff-69b9-4d86-8d43-b39ff4af8567" (UID: "4907a0ff-69b9-4d86-8d43-b39ff4af8567"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.099039 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4907a0ff-69b9-4d86-8d43-b39ff4af8567-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.099095 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wtpg\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-kube-api-access-5wtpg\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.099113 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.099129 4687 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.099144 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4907a0ff-69b9-4d86-8d43-b39ff4af8567-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.099158 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.099172 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4907a0ff-69b9-4d86-8d43-b39ff4af8567-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.599714 4687 generic.go:334] "Generic (PLEG): container finished" podID="4907a0ff-69b9-4d86-8d43-b39ff4af8567" containerID="006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c" exitCode=0 Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.599801 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.599779 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" event={"ID":"4907a0ff-69b9-4d86-8d43-b39ff4af8567","Type":"ContainerDied","Data":"006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c"} Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.599869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kplxq" event={"ID":"4907a0ff-69b9-4d86-8d43-b39ff4af8567","Type":"ContainerDied","Data":"a3a1c879f0c2425c556d02b6a7698cdf29bc23dbdc5d0b80c27989f5015224f8"} Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.599900 4687 scope.go:117] "RemoveContainer" containerID="006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.631698 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kplxq"] Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.636783 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kplxq"] Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.639984 4687 scope.go:117] "RemoveContainer" containerID="006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c" Mar 12 16:08:34 crc kubenswrapper[4687]: E0312 16:08:34.640465 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c\": container with ID starting with 006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c not found: ID does not exist" containerID="006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.640493 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c"} err="failed to get container status \"006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c\": rpc error: code = NotFound desc = could not find container \"006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c\": container with ID starting with 006337fb052cf745cfd5c0362069b430485a66c383843fd2e20b19841a23ab1c not found: ID does not exist" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.859967 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.860089 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:34 crc kubenswrapper[4687]: I0312 16:08:34.912447 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:35 crc kubenswrapper[4687]: I0312 16:08:35.049073 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:35 crc kubenswrapper[4687]: I0312 16:08:35.049131 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:35 crc kubenswrapper[4687]: I0312 16:08:35.645939 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g9hdt" Mar 12 16:08:35 crc kubenswrapper[4687]: I0312 16:08:35.741430 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4907a0ff-69b9-4d86-8d43-b39ff4af8567" path="/var/lib/kubelet/pods/4907a0ff-69b9-4d86-8d43-b39ff4af8567/volumes" Mar 12 16:08:36 crc kubenswrapper[4687]: I0312 16:08:36.086260 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-22nvv" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="registry-server" probeResult="failure" output=< Mar 12 16:08:36 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:08:36 crc kubenswrapper[4687]: > Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.257770 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.259600 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.310669 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.516557 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.516827 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.552586 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.654971 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 16:08:37 crc kubenswrapper[4687]: I0312 16:08:37.667290 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:08:45 crc kubenswrapper[4687]: I0312 16:08:45.116015 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:08:45 crc kubenswrapper[4687]: I0312 16:08:45.191309 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:09:14 crc kubenswrapper[4687]: I0312 16:09:14.121904 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:09:14 crc kubenswrapper[4687]: I0312 16:09:14.122594 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.746809 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9"] Mar 12 16:09:15 crc kubenswrapper[4687]: E0312 16:09:15.747118 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4907a0ff-69b9-4d86-8d43-b39ff4af8567" containerName="registry" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.747138 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4907a0ff-69b9-4d86-8d43-b39ff4af8567" containerName="registry" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.747316 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4907a0ff-69b9-4d86-8d43-b39ff4af8567" containerName="registry" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.747891 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.752243 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.753239 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.753650 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.753996 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.764926 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.770344 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9"] Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.835288 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5t8\" (UniqueName: \"kubernetes.io/projected/1723537d-4733-43eb-b488-484029bd88a7-kube-api-access-bs5t8\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.835381 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1723537d-4733-43eb-b488-484029bd88a7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.835454 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1723537d-4733-43eb-b488-484029bd88a7-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.936687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/1723537d-4733-43eb-b488-484029bd88a7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.936817 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1723537d-4733-43eb-b488-484029bd88a7-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.936891 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5t8\" (UniqueName: \"kubernetes.io/projected/1723537d-4733-43eb-b488-484029bd88a7-kube-api-access-bs5t8\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.938089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1723537d-4733-43eb-b488-484029bd88a7-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.946441 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1723537d-4733-43eb-b488-484029bd88a7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:15 crc kubenswrapper[4687]: I0312 16:09:15.966858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5t8\" (UniqueName: \"kubernetes.io/projected/1723537d-4733-43eb-b488-484029bd88a7-kube-api-access-bs5t8\") pod \"cluster-monitoring-operator-6d5b84845-pjbl9\" (UID: \"1723537d-4733-43eb-b488-484029bd88a7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:16 crc kubenswrapper[4687]: I0312 16:09:16.079608 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" Mar 12 16:09:16 crc kubenswrapper[4687]: I0312 16:09:16.570945 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9"] Mar 12 16:09:16 crc kubenswrapper[4687]: W0312 16:09:16.583496 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1723537d_4733_43eb_b488_484029bd88a7.slice/crio-f0e9fa66ed141deeafa013e7a1000c182752ebd81b1e8d68e66c2d0a0fa86190 WatchSource:0}: Error finding container f0e9fa66ed141deeafa013e7a1000c182752ebd81b1e8d68e66c2d0a0fa86190: Status 404 returned error can't find the container with id f0e9fa66ed141deeafa013e7a1000c182752ebd81b1e8d68e66c2d0a0fa86190 Mar 12 16:09:16 crc kubenswrapper[4687]: I0312 16:09:16.585730 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:09:16 crc kubenswrapper[4687]: I0312 16:09:16.840174 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" event={"ID":"1723537d-4733-43eb-b488-484029bd88a7","Type":"ContainerStarted","Data":"f0e9fa66ed141deeafa013e7a1000c182752ebd81b1e8d68e66c2d0a0fa86190"} Mar 12 16:09:18 crc kubenswrapper[4687]: I0312 16:09:18.854085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" event={"ID":"1723537d-4733-43eb-b488-484029bd88a7","Type":"ContainerStarted","Data":"1e509b0c86de3f81a9f3097faa4ea3962c6a036f33883e7ade47f3a0895283a5"} Mar 12 16:09:18 crc kubenswrapper[4687]: I0312 16:09:18.874605 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-pjbl9" podStartSLOduration=1.974430659 podStartE2EDuration="3.874551387s" podCreationTimestamp="2026-03-12 16:09:15 +0000 UTC" firstStartedPulling="2026-03-12 16:09:16.585548786 +0000 UTC m=+405.549511130" lastFinishedPulling="2026-03-12 16:09:18.485669514 +0000 UTC m=+407.449631858" observedRunningTime="2026-03-12 16:09:18.868706728 +0000 UTC m=+407.832669112" watchObservedRunningTime="2026-03-12 16:09:18.874551387 +0000 UTC m=+407.838513761" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.083136 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9"] Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.084001 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.085868 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.086463 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-h4zfd" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.092389 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9"] Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.180966 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/bc236d4c-8f96-413b-a300-ddb9d524fd23-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2xmr9\" (UID: \"bc236d4c-8f96-413b-a300-ddb9d524fd23\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.282089 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/bc236d4c-8f96-413b-a300-ddb9d524fd23-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2xmr9\" (UID: \"bc236d4c-8f96-413b-a300-ddb9d524fd23\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.288689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/bc236d4c-8f96-413b-a300-ddb9d524fd23-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-2xmr9\" (UID: \"bc236d4c-8f96-413b-a300-ddb9d524fd23\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.397283 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.825841 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9"] Mar 12 16:09:19 crc kubenswrapper[4687]: W0312 16:09:19.835347 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc236d4c_8f96_413b_a300_ddb9d524fd23.slice/crio-f7526c430eae14214d73a2d0b4c736e6648f14989b28508b3e0809d519886f22 WatchSource:0}: Error finding container f7526c430eae14214d73a2d0b4c736e6648f14989b28508b3e0809d519886f22: Status 404 returned error can't find the container with id f7526c430eae14214d73a2d0b4c736e6648f14989b28508b3e0809d519886f22 Mar 12 16:09:19 crc kubenswrapper[4687]: I0312 16:09:19.859791 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" event={"ID":"bc236d4c-8f96-413b-a300-ddb9d524fd23","Type":"ContainerStarted","Data":"f7526c430eae14214d73a2d0b4c736e6648f14989b28508b3e0809d519886f22"} Mar 12 16:09:21 crc kubenswrapper[4687]: I0312 16:09:21.870976 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" event={"ID":"bc236d4c-8f96-413b-a300-ddb9d524fd23","Type":"ContainerStarted","Data":"aecc83bfbef9250e89d9d01e014f15ce22c049e694713f4cb1daa13138b3348e"} Mar 12 16:09:21 crc kubenswrapper[4687]: I0312 16:09:21.871350 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 16:09:21 crc kubenswrapper[4687]: I0312 16:09:21.876502 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 16:09:21 crc kubenswrapper[4687]: I0312 16:09:21.885131 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podStartSLOduration=1.6366471379999998 podStartE2EDuration="2.885112651s" podCreationTimestamp="2026-03-12 16:09:19 +0000 UTC" firstStartedPulling="2026-03-12 16:09:19.83720853 +0000 UTC m=+408.801170874" lastFinishedPulling="2026-03-12 16:09:21.085674043 +0000 UTC m=+410.049636387" observedRunningTime="2026-03-12 16:09:21.88321412 +0000 UTC m=+410.847176464" watchObservedRunningTime="2026-03-12 16:09:21.885112651 +0000 UTC m=+410.849074995" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.153612 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-q9c7m"] Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.154683 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.157215 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.157432 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-mqzpf" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.157476 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.158019 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.171860 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-q9c7m"] Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.232995 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-metrics-client-ca\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.233047 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.233092 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwq5k\" (UniqueName: \"kubernetes.io/projected/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-kube-api-access-pwq5k\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.233160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.335208 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwq5k\" (UniqueName: \"kubernetes.io/projected/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-kube-api-access-pwq5k\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.335410 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-prometheus-operator-tls\") pod 
\"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.335497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-metrics-client-ca\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.335536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.336862 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-metrics-client-ca\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.342956 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.344308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.352124 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwq5k\" (UniqueName: \"kubernetes.io/projected/dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb-kube-api-access-pwq5k\") pod \"prometheus-operator-db54df47d-q9c7m\" (UID: \"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.470884 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.702518 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-q9c7m"] Mar 12 16:09:22 crc kubenswrapper[4687]: I0312 16:09:22.876307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" event={"ID":"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb","Type":"ContainerStarted","Data":"732c442269499c6bc3c24c9d601a97c6cd7abe734d31602f8ef47e9fc3c91380"} Mar 12 16:09:24 crc kubenswrapper[4687]: I0312 16:09:24.890943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" event={"ID":"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb","Type":"ContainerStarted","Data":"15a38d38cd7620e233ddf78819c813e86e382aa9899206874d48ba195e83cb66"} Mar 12 16:09:25 crc kubenswrapper[4687]: I0312 16:09:25.898909 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" event={"ID":"dfd6c109-bee3-4bcd-be3f-6365ccbbe5cb","Type":"ContainerStarted","Data":"eb3fa5ea777afd80e686bb5dfa4eb459747573508651bff68cf4d090b583b943"} Mar 12 16:09:25 crc kubenswrapper[4687]: I0312 16:09:25.916863 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-q9c7m" podStartSLOduration=2.127885521 podStartE2EDuration="3.916846095s" podCreationTimestamp="2026-03-12 16:09:22 +0000 UTC" firstStartedPulling="2026-03-12 16:09:22.719509074 +0000 UTC m=+411.683471418" lastFinishedPulling="2026-03-12 16:09:24.508469658 +0000 UTC m=+413.472431992" observedRunningTime="2026-03-12 16:09:25.916277389 +0000 UTC m=+414.880239743" watchObservedRunningTime="2026-03-12 16:09:25.916846095 +0000 UTC m=+414.880808459" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.530565 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl"] Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.532524 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.534187 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ht22m" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.534537 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.538935 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.547525 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6"] Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.548823 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.551055 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-glf4r"] Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.551944 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.552445 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-s6bhk" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.552450 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.552497 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.552715 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.553087 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.555081 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-wjhkg" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.555264 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.576415 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6"] Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607746 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/531756a9-1fe7-44e7-9a27-db533fc4d7ad-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpls\" (UniqueName: \"kubernetes.io/projected/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-api-access-shpls\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607803 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/531756a9-1fe7-44e7-9a27-db533fc4d7ad-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607819 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-textfile\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607840 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-wtmp\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607884 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607905 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/97d31624-a17b-4523-9f6a-0dd7f5f39920-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607919 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-tls\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607934 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/531756a9-1fe7-44e7-9a27-db533fc4d7ad-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607964 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-sys\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kn4m\" (UniqueName: \"kubernetes.io/projected/531756a9-1fe7-44e7-9a27-db533fc4d7ad-kube-api-access-8kn4m\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.607997 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvsqq\" 
(UniqueName: \"kubernetes.io/projected/49978701-8bcd-4806-820c-066ad8952a47-kube-api-access-cvsqq\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.608013 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49978701-8bcd-4806-820c-066ad8952a47-metrics-client-ca\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.608031 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.608054 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-root\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.608075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.608099 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97d31624-a17b-4523-9f6a-0dd7f5f39920-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.609954 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl"] Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.709831 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.709895 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-root\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.709946 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.709984 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97d31624-a17b-4523-9f6a-0dd7f5f39920-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710020 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/531756a9-1fe7-44e7-9a27-db533fc4d7ad-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpls\" (UniqueName: \"kubernetes.io/projected/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-api-access-shpls\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710044 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-root\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710068 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/531756a9-1fe7-44e7-9a27-db533fc4d7ad-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710135 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-textfile\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710203 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710253 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-wtmp\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " 
pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710295 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710317 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/97d31624-a17b-4523-9f6a-0dd7f5f39920-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710336 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-tls\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710381 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/531756a9-1fe7-44e7-9a27-db533fc4d7ad-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710461 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-sys\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kn4m\" (UniqueName: \"kubernetes.io/projected/531756a9-1fe7-44e7-9a27-db533fc4d7ad-kube-api-access-8kn4m\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvsqq\" (UniqueName: \"kubernetes.io/projected/49978701-8bcd-4806-820c-066ad8952a47-kube-api-access-cvsqq\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.710556 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49978701-8bcd-4806-820c-066ad8952a47-metrics-client-ca\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.711065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/531756a9-1fe7-44e7-9a27-db533fc4d7ad-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: E0312 16:09:27.711161 4687 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 12 16:09:27 crc kubenswrapper[4687]: E0312 16:09:27.711250 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-tls podName:97d31624-a17b-4523-9f6a-0dd7f5f39920 nodeName:}" failed. No retries permitted until 2026-03-12 16:09:28.211216747 +0000 UTC m=+417.175179191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-4j8b6" (UID: "97d31624-a17b-4523-9f6a-0dd7f5f39920") : secret "kube-state-metrics-tls" not found Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.711337 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49978701-8bcd-4806-820c-066ad8952a47-metrics-client-ca\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.711399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-sys\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.711396 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-wtmp\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.711571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/97d31624-a17b-4523-9f6a-0dd7f5f39920-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.711710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-textfile\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.712154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/97d31624-a17b-4523-9f6a-0dd7f5f39920-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 
16:09:27.712154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.717058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.719549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.719561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/49978701-8bcd-4806-820c-066ad8952a47-node-exporter-tls\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.721320 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/531756a9-1fe7-44e7-9a27-db533fc4d7ad-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.733608 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/531756a9-1fe7-44e7-9a27-db533fc4d7ad-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.744844 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpls\" (UniqueName: \"kubernetes.io/projected/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-api-access-shpls\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.744996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvsqq\" (UniqueName: \"kubernetes.io/projected/49978701-8bcd-4806-820c-066ad8952a47-kube-api-access-cvsqq\") pod \"node-exporter-glf4r\" (UID: \"49978701-8bcd-4806-820c-066ad8952a47\") " pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.746942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8kn4m\" (UniqueName: \"kubernetes.io/projected/531756a9-1fe7-44e7-9a27-db533fc4d7ad-kube-api-access-8kn4m\") pod \"openshift-state-metrics-566fddb674-hwbsl\" (UID: \"531756a9-1fe7-44e7-9a27-db533fc4d7ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.852968 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" Mar 12 16:09:27 crc kubenswrapper[4687]: I0312 16:09:27.906639 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-glf4r" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.051532 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl"] Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.218877 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.224200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/97d31624-a17b-4523-9f6a-0dd7f5f39920-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-4j8b6\" (UID: \"97d31624-a17b-4523-9f6a-0dd7f5f39920\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.481736 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.575856 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.577477 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.582000 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.582174 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.588492 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.588672 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.591565 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.591726 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-4vcz2" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.591877 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.592035 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.593593 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4318a1f9-3690-453d-99c5-2aaeaae25da3-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629126 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629149 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4318a1f9-3690-453d-99c5-2aaeaae25da3-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-web-config\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4318a1f9-3690-453d-99c5-2aaeaae25da3-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629432 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjkr\" (UniqueName: \"kubernetes.io/projected/4318a1f9-3690-453d-99c5-2aaeaae25da3-kube-api-access-lkjkr\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629479 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4318a1f9-3690-453d-99c5-2aaeaae25da3-config-out\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629512 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629633 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629652 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4318a1f9-3690-453d-99c5-2aaeaae25da3-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.629716 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-config-volume\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.633437 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730501 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730524 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4318a1f9-3690-453d-99c5-2aaeaae25da3-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730551 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-config-volume\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4318a1f9-3690-453d-99c5-2aaeaae25da3-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730606 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4318a1f9-3690-453d-99c5-2aaeaae25da3-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-web-config\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4318a1f9-3690-453d-99c5-2aaeaae25da3-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730703 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lkjkr\" (UniqueName: \"kubernetes.io/projected/4318a1f9-3690-453d-99c5-2aaeaae25da3-kube-api-access-lkjkr\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730732 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4318a1f9-3690-453d-99c5-2aaeaae25da3-config-out\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.730753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.731760 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4318a1f9-3690-453d-99c5-2aaeaae25da3-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.737041 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4318a1f9-3690-453d-99c5-2aaeaae25da3-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.737161 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.737219 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.739142 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4318a1f9-3690-453d-99c5-2aaeaae25da3-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.739499 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4318a1f9-3690-453d-99c5-2aaeaae25da3-config-out\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: E0312 16:09:28.739617 4687 
secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 12 16:09:28 crc kubenswrapper[4687]: E0312 16:09:28.739670 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-main-tls podName:4318a1f9-3690-453d-99c5-2aaeaae25da3 nodeName:}" failed. No retries permitted until 2026-03-12 16:09:29.239652125 +0000 UTC m=+418.203614579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "4318a1f9-3690-453d-99c5-2aaeaae25da3") : secret "alertmanager-main-tls" not found Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.740464 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.746992 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-web-config\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.748252 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4318a1f9-3690-453d-99c5-2aaeaae25da3-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.756092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-config-volume\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.766606 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkjkr\" (UniqueName: \"kubernetes.io/projected/4318a1f9-3690-453d-99c5-2aaeaae25da3-kube-api-access-lkjkr\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.914597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" event={"ID":"531756a9-1fe7-44e7-9a27-db533fc4d7ad","Type":"ContainerStarted","Data":"a5c87257855c6030078ec3b328d283bfa46fae83bde2ca5e30bbd1f5686b9ada"} Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.914638 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" event={"ID":"531756a9-1fe7-44e7-9a27-db533fc4d7ad","Type":"ContainerStarted","Data":"95a0e8bfee1bb657d28e8ea4b59944552f4d4df9d00661c5d9f0f3ea3f620fb5"} Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.914648 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" event={"ID":"531756a9-1fe7-44e7-9a27-db533fc4d7ad","Type":"ContainerStarted","Data":"d9c9bfb6e0625367e8649a26d16df3013fe12d58e7de5cd31a9bf265aa9ec30a"} Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.915976 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-glf4r" event={"ID":"49978701-8bcd-4806-820c-066ad8952a47","Type":"ContainerStarted","Data":"2e3588449691aeaf4c066c04ee65feaedfff649def574b2bed48f1555d5e7ba9"} Mar 12 16:09:28 crc kubenswrapper[4687]: I0312 16:09:28.963288 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6"] Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.339732 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.344122 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4318a1f9-3690-453d-99c5-2aaeaae25da3-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4318a1f9-3690-453d-99c5-2aaeaae25da3\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:29 crc kubenswrapper[4687]: W0312 16:09:29.376879 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97d31624_a17b_4523_9f6a_0dd7f5f39920.slice/crio-9fdbf0239d9cf8ba97d49eb821da121d9e13b2898d8f8ba661068f2aa2c48940 WatchSource:0}: Error finding container 9fdbf0239d9cf8ba97d49eb821da121d9e13b2898d8f8ba661068f2aa2c48940: Status 404 returned error can't find the container with id 9fdbf0239d9cf8ba97d49eb821da121d9e13b2898d8f8ba661068f2aa2c48940 Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.485462 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z"] Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.487957 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.489871 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-8bdks" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.490051 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.493181 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.493426 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.493571 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.493689 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.493864 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-chot5u0mrgdbt" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.496214 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z"] Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543183 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp6dn\" (UniqueName: \"kubernetes.io/projected/d33075ef-9184-4b45-9272-360a19902c6e-kube-api-access-gp6dn\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543258 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543282 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543433 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-tls\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543494 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d33075ef-9184-4b45-9272-360a19902c6e-metrics-client-ca\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543621 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.543653 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-grpc-tls\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.550473 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.644602 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.644645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-grpc-tls\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.644691 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.645410 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp6dn\" (UniqueName: \"kubernetes.io/projected/d33075ef-9184-4b45-9272-360a19902c6e-kube-api-access-gp6dn\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.646094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.646124 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.646149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-tls\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.646170 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d33075ef-9184-4b45-9272-360a19902c6e-metrics-client-ca\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " 
pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.648204 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.648311 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-grpc-tls\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.648634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.650436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d33075ef-9184-4b45-9272-360a19902c6e-metrics-client-ca\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.650532 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-tls\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.652395 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.652508 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d33075ef-9184-4b45-9272-360a19902c6e-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.662818 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp6dn\" (UniqueName: \"kubernetes.io/projected/d33075ef-9184-4b45-9272-360a19902c6e-kube-api-access-gp6dn\") pod \"thanos-querier-5d45cd5b67-f9k6z\" (UID: \"d33075ef-9184-4b45-9272-360a19902c6e\") " pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.809008 
4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.935408 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" event={"ID":"97d31624-a17b-4523-9f6a-0dd7f5f39920","Type":"ContainerStarted","Data":"9fdbf0239d9cf8ba97d49eb821da121d9e13b2898d8f8ba661068f2aa2c48940"} Mar 12 16:09:29 crc kubenswrapper[4687]: I0312 16:09:29.958186 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 16:09:30 crc kubenswrapper[4687]: W0312 16:09:30.184107 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4318a1f9_3690_453d_99c5_2aaeaae25da3.slice/crio-27dff7ab87be5aaa1f5b7d55860b32a137905719bacb832ededfa672626809cd WatchSource:0}: Error finding container 27dff7ab87be5aaa1f5b7d55860b32a137905719bacb832ededfa672626809cd: Status 404 returned error can't find the container with id 27dff7ab87be5aaa1f5b7d55860b32a137905719bacb832ededfa672626809cd Mar 12 16:09:30 crc kubenswrapper[4687]: I0312 16:09:30.675091 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z"] Mar 12 16:09:30 crc kubenswrapper[4687]: I0312 16:09:30.942546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"27dff7ab87be5aaa1f5b7d55860b32a137905719bacb832ededfa672626809cd"} Mar 12 16:09:30 crc kubenswrapper[4687]: I0312 16:09:30.943865 4687 generic.go:334] "Generic (PLEG): container finished" podID="49978701-8bcd-4806-820c-066ad8952a47" containerID="88c2fb31fc2df46d3cb98f00226dc6e08e6017070f4390f3e608c2e65c6fc7fe" exitCode=0 Mar 12 16:09:30 crc kubenswrapper[4687]: I0312 16:09:30.943915 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-glf4r" event={"ID":"49978701-8bcd-4806-820c-066ad8952a47","Type":"ContainerDied","Data":"88c2fb31fc2df46d3cb98f00226dc6e08e6017070f4390f3e608c2e65c6fc7fe"} Mar 12 16:09:31 crc kubenswrapper[4687]: I0312 16:09:31.949530 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" event={"ID":"d33075ef-9184-4b45-9272-360a19902c6e","Type":"ContainerStarted","Data":"07a96f154d1fef243a7fe57f0076233d3df5e2e2af21c5f14abd8bcbf0421bdf"} Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.444838 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b5fd97b4f-mnlzd"] Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.446141 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.468858 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5fd97b4f-mnlzd"] Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.587098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-oauth-config\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.587148 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-serving-cert\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.587172 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-oauth-serving-cert\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.587203 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-service-ca\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.587276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-trusted-ca-bundle\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.587306 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-config\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.587322 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgscl\" (UniqueName: \"kubernetes.io/projected/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-kube-api-access-zgscl\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.688668 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-config\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc 
kubenswrapper[4687]: I0312 16:09:32.688723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgscl\" (UniqueName: \"kubernetes.io/projected/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-kube-api-access-zgscl\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.688764 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-oauth-config\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.688796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-serving-cert\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.688822 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-oauth-serving-cert\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.688856 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-service-ca\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.688901 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-trusted-ca-bundle\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.689702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-config\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.690081 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-trusted-ca-bundle\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.690337 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-service-ca\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 
16:09:32.690394 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-oauth-serving-cert\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.696522 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-serving-cert\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.696685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-oauth-config\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.708116 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgscl\" (UniqueName: \"kubernetes.io/projected/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-kube-api-access-zgscl\") pod \"console-6b5fd97b4f-mnlzd\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.762835 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.823174 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5955fd9895-8btf6"] Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.824335 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.826521 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.827282 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-h85w5" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.827696 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-djt7ts1lqjfdl" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.827786 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.827835 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.832783 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5955fd9895-8btf6"] Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.834210 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.892176 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/66bdc25f-19d1-4b63-83e6-ad246f6722e8-audit-log\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.892249 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-client-ca-bundle\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.892273 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/66bdc25f-19d1-4b63-83e6-ad246f6722e8-metrics-server-audit-profiles\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.892301 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66bdc25f-19d1-4b63-83e6-ad246f6722e8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.892337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-secret-metrics-server-tls\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " 
pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.892382 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-secret-metrics-client-certs\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.892410 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4bpc\" (UniqueName: \"kubernetes.io/projected/66bdc25f-19d1-4b63-83e6-ad246f6722e8-kube-api-access-m4bpc\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.994220 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/66bdc25f-19d1-4b63-83e6-ad246f6722e8-metrics-server-audit-profiles\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.994263 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-client-ca-bundle\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.994283 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66bdc25f-19d1-4b63-83e6-ad246f6722e8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.994311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-secret-metrics-server-tls\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.994335 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-secret-metrics-client-certs\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.994357 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4bpc\" (UniqueName: \"kubernetes.io/projected/66bdc25f-19d1-4b63-83e6-ad246f6722e8-kube-api-access-m4bpc\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: 
I0312 16:09:32.994461 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/66bdc25f-19d1-4b63-83e6-ad246f6722e8-audit-log\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.995092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/66bdc25f-19d1-4b63-83e6-ad246f6722e8-audit-log\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.995812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66bdc25f-19d1-4b63-83e6-ad246f6722e8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.995818 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/66bdc25f-19d1-4b63-83e6-ad246f6722e8-metrics-server-audit-profiles\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.997689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-secret-metrics-client-certs\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.997727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-secret-metrics-server-tls\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:32 crc kubenswrapper[4687]: I0312 16:09:32.997883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bdc25f-19d1-4b63-83e6-ad246f6722e8-client-ca-bundle\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.014055 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4bpc\" (UniqueName: \"kubernetes.io/projected/66bdc25f-19d1-4b63-83e6-ad246f6722e8-kube-api-access-m4bpc\") pod \"metrics-server-5955fd9895-8btf6\" (UID: \"66bdc25f-19d1-4b63-83e6-ad246f6722e8\") " pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.146197 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.322924 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l"] Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.323852 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.326429 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.326458 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.329634 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l"] Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.400081 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7e3a243a-64b3-42a8-aa54-63ac6ba629c2-monitoring-plugin-cert\") pod \"monitoring-plugin-dd8c9f9fd-2l56l\" (UID: \"7e3a243a-64b3-42a8-aa54-63ac6ba629c2\") " pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.501442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7e3a243a-64b3-42a8-aa54-63ac6ba629c2-monitoring-plugin-cert\") pod \"monitoring-plugin-dd8c9f9fd-2l56l\" (UID: \"7e3a243a-64b3-42a8-aa54-63ac6ba629c2\") " pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.504406 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7e3a243a-64b3-42a8-aa54-63ac6ba629c2-monitoring-plugin-cert\") pod \"monitoring-plugin-dd8c9f9fd-2l56l\" (UID: \"7e3a243a-64b3-42a8-aa54-63ac6ba629c2\") " pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.680119 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.747599 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.749939 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.752087 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.752697 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.754478 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.754889 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.755220 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-cg0gvj368pflj" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.755503 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-vltvr" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.756638 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.757300 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.758472 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.759098 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.759285 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.763596 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.764967 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.786540 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjs4\" (UniqueName: \"kubernetes.io/projected/4db876a5-e8fd-4489-9ad8-2eb862247406-kube-api-access-tqjs4\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805523 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805559 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-config\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805593 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805669 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805689 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4db876a5-e8fd-4489-9ad8-2eb862247406-config-out\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805713 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805749 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-web-config\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4db876a5-e8fd-4489-9ad8-2eb862247406-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805807 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc 
kubenswrapper[4687]: I0312 16:09:33.805839 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805870 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805889 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805912 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805934 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805959 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.805990 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907666 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqjs4\" (UniqueName: \"kubernetes.io/projected/4db876a5-e8fd-4489-9ad8-2eb862247406-kube-api-access-tqjs4\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907760 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-config\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907797 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907876 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907901 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4db876a5-e8fd-4489-9ad8-2eb862247406-config-out\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907970 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-web-config\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.907994 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4db876a5-e8fd-4489-9ad8-2eb862247406-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908052 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908150 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908198 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.908850 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.909131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.910301 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.911264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.914046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.914546 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.915975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.916070 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.916104 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.919308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.919335 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4db876a5-e8fd-4489-9ad8-2eb862247406-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.924010 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.924474 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4db876a5-e8fd-4489-9ad8-2eb862247406-config-out\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.924939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqjs4\" (UniqueName: \"kubernetes.io/projected/4db876a5-e8fd-4489-9ad8-2eb862247406-kube-api-access-tqjs4\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.927905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-config\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.928667 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.928683 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4db876a5-e8fd-4489-9ad8-2eb862247406-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:33 crc kubenswrapper[4687]: I0312 16:09:33.948762 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4db876a5-e8fd-4489-9ad8-2eb862247406-web-config\") pod \"prometheus-k8s-0\" (UID: \"4db876a5-e8fd-4489-9ad8-2eb862247406\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.043643 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l"] Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.136003 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.184192 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5955fd9895-8btf6"] Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.282608 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5fd97b4f-mnlzd"] Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.627314 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 16:09:34 crc kubenswrapper[4687]: W0312 16:09:34.638801 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4db876a5_e8fd_4489_9ad8_2eb862247406.slice/crio-ac55f03b11cfdae5b854f23cc79d81025b8e438c9993fee60ef4f2ef2d8ea3f6 WatchSource:0}: Error finding container ac55f03b11cfdae5b854f23cc79d81025b8e438c9993fee60ef4f2ef2d8ea3f6: Status 404 returned error can't find the container with id ac55f03b11cfdae5b854f23cc79d81025b8e438c9993fee60ef4f2ef2d8ea3f6 Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.964895 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"ac55f03b11cfdae5b854f23cc79d81025b8e438c9993fee60ef4f2ef2d8ea3f6"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.967564 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-glf4r" event={"ID":"49978701-8bcd-4806-820c-066ad8952a47","Type":"ContainerStarted","Data":"7546d5382322843a1bb9ddad468e61b8aff063f0e64f04309dab47d6e960d1cf"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.967629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-glf4r" event={"ID":"49978701-8bcd-4806-820c-066ad8952a47","Type":"ContainerStarted","Data":"d989f87f83c3756669708965f76c6f45461197eea41ccaf95cb44056d23b4177"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.968486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" event={"ID":"7e3a243a-64b3-42a8-aa54-63ac6ba629c2","Type":"ContainerStarted","Data":"efda3b1203cdc09fbe3155a2458e067b04dded4019d0afeac4f03854e639b39a"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.970713 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" event={"ID":"531756a9-1fe7-44e7-9a27-db533fc4d7ad","Type":"ContainerStarted","Data":"c1b220ea1b3bac5328c0056d2a0784f77a76d4c36a0d6e7250c96524d277d074"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.971602 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" event={"ID":"66bdc25f-19d1-4b63-83e6-ad246f6722e8","Type":"ContainerStarted","Data":"f872921ce7b245a95eea3e434d778c7098972041965bf2bf6acb93fc94c4b5c5"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.973949 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5fd97b4f-mnlzd" event={"ID":"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0","Type":"ContainerStarted","Data":"79ad9f8ecfd11222a2a22ac2a3368d080599c90b616318ff6ca1a8a5729d77bb"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.973975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5fd97b4f-mnlzd" 
event={"ID":"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0","Type":"ContainerStarted","Data":"cda32bff6a750286acc9e2d98691751b524786688c539957de47a28b21ce7671"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.976031 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" event={"ID":"97d31624-a17b-4523-9f6a-0dd7f5f39920","Type":"ContainerStarted","Data":"44d6dd52d8a21dfef054457c77baea2c0cf089892e595a9df92422449601fd3b"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.976079 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" event={"ID":"97d31624-a17b-4523-9f6a-0dd7f5f39920","Type":"ContainerStarted","Data":"757ff8b69575c7f1b0be229b1bbda6337266711b90a84c5e2ef14cbb6f8582e8"} Mar 12 16:09:34 crc kubenswrapper[4687]: I0312 16:09:34.992287 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-glf4r" podStartSLOduration=5.71208028 podStartE2EDuration="7.99226699s" podCreationTimestamp="2026-03-12 16:09:27 +0000 UTC" firstStartedPulling="2026-03-12 16:09:27.940513905 +0000 UTC m=+416.904476259" lastFinishedPulling="2026-03-12 16:09:30.220700625 +0000 UTC m=+419.184662969" observedRunningTime="2026-03-12 16:09:34.988211179 +0000 UTC m=+423.952173533" watchObservedRunningTime="2026-03-12 16:09:34.99226699 +0000 UTC m=+423.956229334" Mar 12 16:09:35 crc kubenswrapper[4687]: I0312 16:09:35.007780 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b5fd97b4f-mnlzd" podStartSLOduration=3.007763713 podStartE2EDuration="3.007763713s" podCreationTimestamp="2026-03-12 16:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:09:35.005824271 +0000 UTC m=+423.969786615" watchObservedRunningTime="2026-03-12 16:09:35.007763713 +0000 UTC m=+423.971726057" Mar 12 16:09:35 crc kubenswrapper[4687]: I0312 16:09:35.023535 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-hwbsl" podStartSLOduration=2.8334032049999998 podStartE2EDuration="8.023520304s" podCreationTimestamp="2026-03-12 16:09:27 +0000 UTC" firstStartedPulling="2026-03-12 16:09:28.558640674 +0000 UTC m=+417.522603058" lastFinishedPulling="2026-03-12 16:09:33.748757813 +0000 UTC m=+422.712720157" observedRunningTime="2026-03-12 16:09:35.020413428 +0000 UTC m=+423.984375772" watchObservedRunningTime="2026-03-12 16:09:35.023520304 +0000 UTC m=+423.987482648" Mar 12 16:09:35 crc kubenswrapper[4687]: I0312 16:09:35.983739 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" event={"ID":"97d31624-a17b-4523-9f6a-0dd7f5f39920","Type":"ContainerStarted","Data":"b86a03477b8a03b235e9609b668d687af0a3b6538e5d6ece6707011af1c606a5"} Mar 12 16:09:36 crc kubenswrapper[4687]: I0312 16:09:36.007544 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-4j8b6" podStartSLOduration=4.216482532 podStartE2EDuration="9.007526849s" podCreationTimestamp="2026-03-12 16:09:27 +0000 UTC" firstStartedPulling="2026-03-12 16:09:29.378654334 +0000 UTC m=+418.342616668" lastFinishedPulling="2026-03-12 16:09:34.169698641 +0000 UTC m=+423.133660985" observedRunningTime="2026-03-12 16:09:36.005169535 +0000 UTC m=+424.969131899" 
watchObservedRunningTime="2026-03-12 16:09:36.007526849 +0000 UTC m=+424.971489193" Mar 12 16:09:40 crc kubenswrapper[4687]: I0312 16:09:40.012576 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" event={"ID":"66bdc25f-19d1-4b63-83e6-ad246f6722e8","Type":"ContainerStarted","Data":"90fb3737b76133d7293c829b2e35edef4ace11e5b9f9e88e9723dc1cd643ef71"} Mar 12 16:09:40 crc kubenswrapper[4687]: I0312 16:09:40.014546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" event={"ID":"d33075ef-9184-4b45-9272-360a19902c6e","Type":"ContainerStarted","Data":"9bfff1d162bdae696438f4f78b3a540e1f5d6bdb95d40b65bfca7c011d071d36"} Mar 12 16:09:40 crc kubenswrapper[4687]: I0312 16:09:40.015707 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"b4d226143281c651f515883763b21374f66e32865deabbb7647f62bb96b1a2d8"} Mar 12 16:09:40 crc kubenswrapper[4687]: I0312 16:09:40.017753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"98b00dd6bc5d5e2c6ec69d2a1c9eae000d8562501ca417507290e7aad161a28b"} Mar 12 16:09:40 crc kubenswrapper[4687]: I0312 16:09:40.019674 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" event={"ID":"7e3a243a-64b3-42a8-aa54-63ac6ba629c2","Type":"ContainerStarted","Data":"8da4f095821392812201332fc57453b066abaca1e0e4114b859f2f53c4c15141"} Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.028398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" event={"ID":"d33075ef-9184-4b45-9272-360a19902c6e","Type":"ContainerStarted","Data":"2c5bccc13c7ef0b75a0ad8948c6111c11e72c3ed5ad8d978e694c99644091af5"} Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.028441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" event={"ID":"d33075ef-9184-4b45-9272-360a19902c6e","Type":"ContainerStarted","Data":"1cba2abc9dbe09f17cd8a278502313ba538bc2edc006c9e6dfdfcb514f464f84"} Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.030293 4687 generic.go:334] "Generic (PLEG): container finished" podID="4318a1f9-3690-453d-99c5-2aaeaae25da3" containerID="98b00dd6bc5d5e2c6ec69d2a1c9eae000d8562501ca417507290e7aad161a28b" exitCode=0 Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.030352 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerDied","Data":"98b00dd6bc5d5e2c6ec69d2a1c9eae000d8562501ca417507290e7aad161a28b"} Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.031994 4687 generic.go:334] "Generic (PLEG): container finished" podID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerID="b4d226143281c651f515883763b21374f66e32865deabbb7647f62bb96b1a2d8" exitCode=0 Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.032039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerDied","Data":"b4d226143281c651f515883763b21374f66e32865deabbb7647f62bb96b1a2d8"} Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 
16:09:41.032649 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.037848 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.108202 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" podStartSLOduration=3.6690056699999998 podStartE2EDuration="9.108184145s" podCreationTimestamp="2026-03-12 16:09:32 +0000 UTC" firstStartedPulling="2026-03-12 16:09:34.194849818 +0000 UTC m=+423.158812162" lastFinishedPulling="2026-03-12 16:09:39.634028243 +0000 UTC m=+428.597990637" observedRunningTime="2026-03-12 16:09:41.102425478 +0000 UTC m=+430.066387822" watchObservedRunningTime="2026-03-12 16:09:41.108184145 +0000 UTC m=+430.072146489" Mar 12 16:09:41 crc kubenswrapper[4687]: I0312 16:09:41.119478 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" podStartSLOduration=2.63017032 podStartE2EDuration="8.119461493s" podCreationTimestamp="2026-03-12 16:09:33 +0000 UTC" firstStartedPulling="2026-03-12 16:09:34.142947431 +0000 UTC m=+423.106909775" lastFinishedPulling="2026-03-12 16:09:39.632238604 +0000 UTC m=+428.596200948" observedRunningTime="2026-03-12 16:09:41.11641868 +0000 UTC m=+430.080381034" watchObservedRunningTime="2026-03-12 16:09:41.119461493 +0000 UTC m=+430.083423837" Mar 12 16:09:42 crc kubenswrapper[4687]: I0312 16:09:42.040968 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" event={"ID":"d33075ef-9184-4b45-9272-360a19902c6e","Type":"ContainerStarted","Data":"62c758ecb38e422960c3740404f9538170a1dad64c903b1f0b2fc1c525c913cc"} Mar 12 16:09:42 crc kubenswrapper[4687]: I0312 16:09:42.041323 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" event={"ID":"d33075ef-9184-4b45-9272-360a19902c6e","Type":"ContainerStarted","Data":"189ca10d9e37c9c6d95d31965af6e2f890d53d9d7264cc72007d7e1610124ff8"} Mar 12 16:09:42 crc kubenswrapper[4687]: I0312 16:09:42.041343 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" event={"ID":"d33075ef-9184-4b45-9272-360a19902c6e","Type":"ContainerStarted","Data":"7d9219d1b8fe8dc5cab3be6be26cd53e6c85b1d21d39e706805d1fd42f361442"} Mar 12 16:09:42 crc kubenswrapper[4687]: I0312 16:09:42.041440 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:42 crc kubenswrapper[4687]: I0312 16:09:42.075260 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" podStartSLOduration=2.914605885 podStartE2EDuration="13.075226288s" podCreationTimestamp="2026-03-12 16:09:29 +0000 UTC" firstStartedPulling="2026-03-12 16:09:31.266004074 +0000 UTC m=+420.229966418" lastFinishedPulling="2026-03-12 16:09:41.426624477 +0000 UTC m=+430.390586821" observedRunningTime="2026-03-12 16:09:42.074403595 +0000 UTC m=+431.038366019" watchObservedRunningTime="2026-03-12 16:09:42.075226288 +0000 UTC m=+431.039188632" Mar 12 16:09:43 crc kubenswrapper[4687]: I0312 16:09:43.330442 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:43 crc kubenswrapper[4687]: I0312 16:09:43.330484 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:43 crc kubenswrapper[4687]: I0312 16:09:43.344630 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.121533 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.121933 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.350207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"d28a36481bd5a30a79b61e6454a57137e5b04776a02dc7976541765edec5c8ba"} Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.350252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"3a8bd175ac071f6db664ca40646431982a193569ff682e88ff3b284a70089fd8"} Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.350265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"575253548ffceb66e7cd9122972a0576dce27c89aa24320a5821fe2dcce4264d"} Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.350276 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"3d3ccd4b64800afe1eb83ed53fe8945ad28ffd9d5f0598fdeaeb565276c6ec08"} Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.353641 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:09:44 crc kubenswrapper[4687]: I0312 16:09:44.401244 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l4j4z"] Mar 12 16:09:45 crc kubenswrapper[4687]: I0312 16:09:45.359717 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"acc3e1bf74e87886531c147c9c70af6394ec4904cdd3929de950fd3a51a12d1a"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.372813 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4318a1f9-3690-453d-99c5-2aaeaae25da3","Type":"ContainerStarted","Data":"736e8657c653ce1d9f4cb9a6b88e9e2b321121a72545389ed048805c7ddd3a79"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.380406 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"34d88a7b478129b4aa5cb171f8e7b121b7042272102705bec04404e037715b7f"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.380459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"35b15cf2d2f8f257b0b91f8eb929d9ebbadc18e11a209027061c021f6a15b29d"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.380471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"3086a03cc0030499693c16be0d4e13aed3b7f04004ab6133d50c03c251b37e92"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.380480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"869ed7bf3c0b138ef0732150a5d3974224bd154e4e9b3fcaed0e8ca765fe5fa6"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.380490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"05e3086c70b1fa755ba3160a2317882dd5dfa1859beaa9ee33d436010122b115"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.380499 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4db876a5-e8fd-4489-9ad8-2eb862247406","Type":"ContainerStarted","Data":"4f12468f6aa277dcec988d311d5862c58db22b28327799a4c7b58e857f63029a"} Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.423099 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=5.117467298 podStartE2EDuration="18.423084639s" podCreationTimestamp="2026-03-12 16:09:28 +0000 UTC" firstStartedPulling="2026-03-12 16:09:30.212280605 +0000 UTC m=+419.176242949" lastFinishedPulling="2026-03-12 16:09:43.517897946 +0000 UTC m=+432.481860290" observedRunningTime="2026-03-12 16:09:46.41981813 +0000 UTC m=+435.383780494" watchObservedRunningTime="2026-03-12 16:09:46.423084639 +0000 UTC m=+435.387046983" Mar 12 16:09:46 crc kubenswrapper[4687]: I0312 16:09:46.466096 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=2.910098436 podStartE2EDuration="13.466082593s" podCreationTimestamp="2026-03-12 16:09:33 +0000 UTC" firstStartedPulling="2026-03-12 16:09:34.646294498 +0000 UTC m=+423.610256842" lastFinishedPulling="2026-03-12 16:09:45.202278655 +0000 UTC m=+434.166240999" observedRunningTime="2026-03-12 16:09:46.464031668 +0000 UTC m=+435.427994012" watchObservedRunningTime="2026-03-12 16:09:46.466082593 +0000 UTC m=+435.430044937" Mar 12 16:09:49 crc kubenswrapper[4687]: I0312 16:09:49.136915 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:09:49 crc kubenswrapper[4687]: I0312 16:09:49.821479 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" Mar 12 16:09:53 crc kubenswrapper[4687]: I0312 16:09:53.147128 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:09:53 crc kubenswrapper[4687]: I0312 16:09:53.147451 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.129861 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555530-sp779"] Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.131408 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555530-sp779" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.136615 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.137068 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.137321 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.143269 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555530-sp779"] Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.276937 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpr6d\" (UniqueName: \"kubernetes.io/projected/f118be6d-858e-413e-a006-922ea753d28c-kube-api-access-hpr6d\") pod \"auto-csr-approver-29555530-sp779\" (UID: \"f118be6d-858e-413e-a006-922ea753d28c\") " pod="openshift-infra/auto-csr-approver-29555530-sp779" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.378927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpr6d\" (UniqueName: \"kubernetes.io/projected/f118be6d-858e-413e-a006-922ea753d28c-kube-api-access-hpr6d\") pod \"auto-csr-approver-29555530-sp779\" (UID: \"f118be6d-858e-413e-a006-922ea753d28c\") " pod="openshift-infra/auto-csr-approver-29555530-sp779" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.413702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpr6d\" (UniqueName: \"kubernetes.io/projected/f118be6d-858e-413e-a006-922ea753d28c-kube-api-access-hpr6d\") pod \"auto-csr-approver-29555530-sp779\" (UID: \"f118be6d-858e-413e-a006-922ea753d28c\") " pod="openshift-infra/auto-csr-approver-29555530-sp779" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.455662 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555530-sp779" Mar 12 16:10:00 crc kubenswrapper[4687]: I0312 16:10:00.955329 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555530-sp779"] Mar 12 16:10:01 crc kubenswrapper[4687]: I0312 16:10:01.489747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555530-sp779" event={"ID":"f118be6d-858e-413e-a006-922ea753d28c","Type":"ContainerStarted","Data":"f008b74836cba8b30870bdd5235811b5033a7d9e12601fd5c3a53bcd08f088ef"} Mar 12 16:10:02 crc kubenswrapper[4687]: I0312 16:10:02.496115 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555530-sp779" event={"ID":"f118be6d-858e-413e-a006-922ea753d28c","Type":"ContainerStarted","Data":"ca14ec39b05931bfc595271691001f874b9f86c96b87e8d97b87bdd24947f54d"} Mar 12 16:10:02 crc kubenswrapper[4687]: I0312 16:10:02.515253 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555530-sp779" podStartSLOduration=1.3851109830000001 podStartE2EDuration="2.515221495s" podCreationTimestamp="2026-03-12 16:10:00 +0000 UTC" firstStartedPulling="2026-03-12 16:10:00.962605962 +0000 UTC m=+449.926568306" lastFinishedPulling="2026-03-12 16:10:02.092716434 +0000 UTC m=+451.056678818" observedRunningTime="2026-03-12 16:10:02.513397245 +0000 UTC m=+451.477359629" watchObservedRunningTime="2026-03-12 16:10:02.515221495 +0000 UTC m=+451.479183869" Mar 12 16:10:03 crc kubenswrapper[4687]: I0312 16:10:03.502825 4687 generic.go:334] "Generic (PLEG): container finished" podID="f118be6d-858e-413e-a006-922ea753d28c" containerID="ca14ec39b05931bfc595271691001f874b9f86c96b87e8d97b87bdd24947f54d" exitCode=0 Mar 12 16:10:03 crc kubenswrapper[4687]: I0312 16:10:03.502902 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555530-sp779" event={"ID":"f118be6d-858e-413e-a006-922ea753d28c","Type":"ContainerDied","Data":"ca14ec39b05931bfc595271691001f874b9f86c96b87e8d97b87bdd24947f54d"} Mar 12 16:10:04 crc kubenswrapper[4687]: I0312 16:10:04.808662 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555530-sp779" Mar 12 16:10:04 crc kubenswrapper[4687]: I0312 16:10:04.981292 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpr6d\" (UniqueName: \"kubernetes.io/projected/f118be6d-858e-413e-a006-922ea753d28c-kube-api-access-hpr6d\") pod \"f118be6d-858e-413e-a006-922ea753d28c\" (UID: \"f118be6d-858e-413e-a006-922ea753d28c\") " Mar 12 16:10:04 crc kubenswrapper[4687]: I0312 16:10:04.988952 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f118be6d-858e-413e-a006-922ea753d28c-kube-api-access-hpr6d" (OuterVolumeSpecName: "kube-api-access-hpr6d") pod "f118be6d-858e-413e-a006-922ea753d28c" (UID: "f118be6d-858e-413e-a006-922ea753d28c"). InnerVolumeSpecName "kube-api-access-hpr6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:10:05 crc kubenswrapper[4687]: I0312 16:10:05.082893 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpr6d\" (UniqueName: \"kubernetes.io/projected/f118be6d-858e-413e-a006-922ea753d28c-kube-api-access-hpr6d\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:05 crc kubenswrapper[4687]: I0312 16:10:05.104652 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555524-2x297"] Mar 12 16:10:05 crc kubenswrapper[4687]: I0312 16:10:05.110425 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555524-2x297"] Mar 12 16:10:05 crc kubenswrapper[4687]: I0312 16:10:05.520288 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555530-sp779" event={"ID":"f118be6d-858e-413e-a006-922ea753d28c","Type":"ContainerDied","Data":"f008b74836cba8b30870bdd5235811b5033a7d9e12601fd5c3a53bcd08f088ef"} Mar 12 16:10:05 crc kubenswrapper[4687]: I0312 16:10:05.520603 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f008b74836cba8b30870bdd5235811b5033a7d9e12601fd5c3a53bcd08f088ef" Mar 12 16:10:05 crc kubenswrapper[4687]: I0312 16:10:05.520380 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555530-sp779" Mar 12 16:10:05 crc kubenswrapper[4687]: I0312 16:10:05.742600 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5ffe1b-8f89-43f0-95fb-8e3823891f2b" path="/var/lib/kubelet/pods/dd5ffe1b-8f89-43f0-95fb-8e3823891f2b/volumes" Mar 12 16:10:09 crc kubenswrapper[4687]: I0312 16:10:09.438603 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-l4j4z" podUID="a123cd77-f222-4d99-b77f-f11c6c323005" containerName="console" containerID="cri-o://98478bd56cae521d0f23e55870a23b95030bd0bc00280da5e95baeea2c10cea8" gracePeriod=15 Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.252003 4687 patch_prober.go:28] interesting pod/console-f9d7485db-l4j4z container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.252352 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-l4j4z" podUID="a123cd77-f222-4d99-b77f-f11c6c323005" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.551187 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l4j4z_a123cd77-f222-4d99-b77f-f11c6c323005/console/0.log" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.551475 4687 generic.go:334] "Generic (PLEG): container finished" podID="a123cd77-f222-4d99-b77f-f11c6c323005" containerID="98478bd56cae521d0f23e55870a23b95030bd0bc00280da5e95baeea2c10cea8" exitCode=2 Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.551500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4j4z" event={"ID":"a123cd77-f222-4d99-b77f-f11c6c323005","Type":"ContainerDied","Data":"98478bd56cae521d0f23e55870a23b95030bd0bc00280da5e95baeea2c10cea8"} Mar 12 16:10:10 crc 
kubenswrapper[4687]: I0312 16:10:10.551523 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4j4z" event={"ID":"a123cd77-f222-4d99-b77f-f11c6c323005","Type":"ContainerDied","Data":"bc3c9ba518809a99fb890d46b219b9c4fa49ffc34509418f8a16d9d875674fc4"} Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.551537 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3c9ba518809a99fb890d46b219b9c4fa49ffc34509418f8a16d9d875674fc4" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.592758 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l4j4z_a123cd77-f222-4d99-b77f-f11c6c323005/console/0.log" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.592818 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.762183 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-trusted-ca-bundle\") pod \"a123cd77-f222-4d99-b77f-f11c6c323005\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.762301 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8gfd\" (UniqueName: \"kubernetes.io/projected/a123cd77-f222-4d99-b77f-f11c6c323005-kube-api-access-b8gfd\") pod \"a123cd77-f222-4d99-b77f-f11c6c323005\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.762412 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-serving-cert\") pod \"a123cd77-f222-4d99-b77f-f11c6c323005\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.762452 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-service-ca\") pod \"a123cd77-f222-4d99-b77f-f11c6c323005\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.762482 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-console-config\") pod \"a123cd77-f222-4d99-b77f-f11c6c323005\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.762506 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-oauth-config\") pod \"a123cd77-f222-4d99-b77f-f11c6c323005\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.762557 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-oauth-serving-cert\") pod \"a123cd77-f222-4d99-b77f-f11c6c323005\" (UID: \"a123cd77-f222-4d99-b77f-f11c6c323005\") " Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.763134 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a123cd77-f222-4d99-b77f-f11c6c323005" (UID: "a123cd77-f222-4d99-b77f-f11c6c323005"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.763195 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a123cd77-f222-4d99-b77f-f11c6c323005" (UID: "a123cd77-f222-4d99-b77f-f11c6c323005"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.763233 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-service-ca" (OuterVolumeSpecName: "service-ca") pod "a123cd77-f222-4d99-b77f-f11c6c323005" (UID: "a123cd77-f222-4d99-b77f-f11c6c323005"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.763429 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-console-config" (OuterVolumeSpecName: "console-config") pod "a123cd77-f222-4d99-b77f-f11c6c323005" (UID: "a123cd77-f222-4d99-b77f-f11c6c323005"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.768868 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a123cd77-f222-4d99-b77f-f11c6c323005-kube-api-access-b8gfd" (OuterVolumeSpecName: "kube-api-access-b8gfd") pod "a123cd77-f222-4d99-b77f-f11c6c323005" (UID: "a123cd77-f222-4d99-b77f-f11c6c323005"). InnerVolumeSpecName "kube-api-access-b8gfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.769265 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a123cd77-f222-4d99-b77f-f11c6c323005" (UID: "a123cd77-f222-4d99-b77f-f11c6c323005"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.769680 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a123cd77-f222-4d99-b77f-f11c6c323005" (UID: "a123cd77-f222-4d99-b77f-f11c6c323005"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.864165 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.864209 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8gfd\" (UniqueName: \"kubernetes.io/projected/a123cd77-f222-4d99-b77f-f11c6c323005-kube-api-access-b8gfd\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.864222 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.864234 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.864245 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.864254 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a123cd77-f222-4d99-b77f-f11c6c323005-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:10 crc kubenswrapper[4687]: I0312 16:10:10.864265 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a123cd77-f222-4d99-b77f-f11c6c323005-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:10:11 crc kubenswrapper[4687]: I0312 16:10:11.557601 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-l4j4z" Mar 12 16:10:11 crc kubenswrapper[4687]: I0312 16:10:11.594864 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l4j4z"] Mar 12 16:10:11 crc kubenswrapper[4687]: I0312 16:10:11.602202 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-l4j4z"] Mar 12 16:10:11 crc kubenswrapper[4687]: I0312 16:10:11.753436 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a123cd77-f222-4d99-b77f-f11c6c323005" path="/var/lib/kubelet/pods/a123cd77-f222-4d99-b77f-f11c6c323005/volumes" Mar 12 16:10:13 crc kubenswrapper[4687]: I0312 16:10:13.155688 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:10:13 crc kubenswrapper[4687]: I0312 16:10:13.160299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.123457 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.123798 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.123856 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.124658 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca9af5a60d7edd2ac85d52bd0521eb3c6c568aec927cc166905e24d8ea4a0e8a"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.124750 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://ca9af5a60d7edd2ac85d52bd0521eb3c6c568aec927cc166905e24d8ea4a0e8a" gracePeriod=600 Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.575916 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="ca9af5a60d7edd2ac85d52bd0521eb3c6c568aec927cc166905e24d8ea4a0e8a" exitCode=0 Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.576247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"ca9af5a60d7edd2ac85d52bd0521eb3c6c568aec927cc166905e24d8ea4a0e8a"} Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.576274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"b9433415f064106c8d16d6be7a69dee0d049ec5465c0e65f1afbe31077aa75da"} Mar 12 16:10:14 crc kubenswrapper[4687]: I0312 16:10:14.576290 4687 scope.go:117] "RemoveContainer" containerID="b933090f380a899e933d4c45b47e0961f369f43f1a17634844a54ddbefc5ca34" Mar 12 16:10:34 crc kubenswrapper[4687]: I0312 16:10:34.137070 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:10:34 crc kubenswrapper[4687]: I0312 16:10:34.187557 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:10:34 crc kubenswrapper[4687]: I0312 16:10:34.773927 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 16:10:56 crc kubenswrapper[4687]: I0312 16:10:56.850855 4687 scope.go:117] "RemoveContainer" containerID="98478bd56cae521d0f23e55870a23b95030bd0bc00280da5e95baeea2c10cea8" Mar 12 16:10:56 crc kubenswrapper[4687]: I0312 16:10:56.883186 4687 scope.go:117] "RemoveContainer" containerID="99db254250f235713504aadb1e775b5bb97b6a799d764ce225e2ba95cee1e841" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.498803 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5fcb6d6857-pdg8m"] Mar 12 16:11:23 crc kubenswrapper[4687]: E0312 16:11:23.499624 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a123cd77-f222-4d99-b77f-f11c6c323005" containerName="console" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.499641 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a123cd77-f222-4d99-b77f-f11c6c323005" containerName="console" Mar 12 16:11:23 crc kubenswrapper[4687]: E0312 16:11:23.499664 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f118be6d-858e-413e-a006-922ea753d28c" containerName="oc" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.499673 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f118be6d-858e-413e-a006-922ea753d28c" containerName="oc" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.499925 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f118be6d-858e-413e-a006-922ea753d28c" containerName="oc" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.499942 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a123cd77-f222-4d99-b77f-f11c6c323005" containerName="console" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.500795 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.510928 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5fcb6d6857-pdg8m"] Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.644112 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-trusted-ca-bundle\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.644156 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-oauth-serving-cert\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.644194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-serving-cert\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.644216 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-config\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.644249 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-oauth-config\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.644336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-service-ca\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.644377 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jkg\" (UniqueName: \"kubernetes.io/projected/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-kube-api-access-22jkg\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.745729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-serving-cert\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc 
kubenswrapper[4687]: I0312 16:11:23.745795 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-config\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.745848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-oauth-config\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.745899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-service-ca\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.745932 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22jkg\" (UniqueName: \"kubernetes.io/projected/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-kube-api-access-22jkg\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.746076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-trusted-ca-bundle\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.746109 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-oauth-serving-cert\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.747631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-oauth-serving-cert\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.747802 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-service-ca\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.748114 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-config\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.748556 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-trusted-ca-bundle\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.752587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-oauth-config\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.753808 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-serving-cert\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.780645 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22jkg\" (UniqueName: \"kubernetes.io/projected/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-kube-api-access-22jkg\") pod \"console-5fcb6d6857-pdg8m\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:23 crc kubenswrapper[4687]: I0312 16:11:23.836374 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:24 crc kubenswrapper[4687]: I0312 16:11:24.257296 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5fcb6d6857-pdg8m"] Mar 12 16:11:25 crc kubenswrapper[4687]: I0312 16:11:25.105272 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fcb6d6857-pdg8m" event={"ID":"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e","Type":"ContainerStarted","Data":"cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d"} Mar 12 16:11:25 crc kubenswrapper[4687]: I0312 16:11:25.105630 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fcb6d6857-pdg8m" event={"ID":"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e","Type":"ContainerStarted","Data":"80f58c21411916dc86cbb8960759f504314fa10b733fed8b18d2012ee361a68a"} Mar 12 16:11:25 crc kubenswrapper[4687]: I0312 16:11:25.137643 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5fcb6d6857-pdg8m" podStartSLOduration=2.137617431 podStartE2EDuration="2.137617431s" podCreationTimestamp="2026-03-12 16:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:11:25.134991202 +0000 UTC m=+534.098953596" watchObservedRunningTime="2026-03-12 16:11:25.137617431 +0000 UTC m=+534.101579815" Mar 12 16:11:33 crc kubenswrapper[4687]: I0312 16:11:33.837030 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:33 crc kubenswrapper[4687]: I0312 16:11:33.837709 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:33 crc kubenswrapper[4687]: I0312 16:11:33.844107 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:34 crc kubenswrapper[4687]: I0312 16:11:34.187073 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:11:34 crc kubenswrapper[4687]: I0312 16:11:34.238900 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b5fd97b4f-mnlzd"] Mar 12 16:11:56 crc kubenswrapper[4687]: I0312 16:11:56.944003 4687 scope.go:117] "RemoveContainer" containerID="f70e2e2ceb248ada59de7f90e434e97c7ba90306ab1f202b6e4b1d5cb02b724a" Mar 12 16:11:59 crc kubenswrapper[4687]: I0312 16:11:59.278289 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b5fd97b4f-mnlzd" podUID="1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" containerName="console" containerID="cri-o://79ad9f8ecfd11222a2a22ac2a3368d080599c90b616318ff6ca1a8a5729d77bb" gracePeriod=15 Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.158207 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555532-dxrn2"] Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.160435 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.163288 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.163516 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.165587 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.169487 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555532-dxrn2"] Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.221277 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76gf5\" (UniqueName: \"kubernetes.io/projected/c209e8d9-555d-4362-a44a-0279015e41f1-kube-api-access-76gf5\") pod \"auto-csr-approver-29555532-dxrn2\" (UID: \"c209e8d9-555d-4362-a44a-0279015e41f1\") " pod="openshift-infra/auto-csr-approver-29555532-dxrn2" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.323692 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76gf5\" (UniqueName: \"kubernetes.io/projected/c209e8d9-555d-4362-a44a-0279015e41f1-kube-api-access-76gf5\") pod \"auto-csr-approver-29555532-dxrn2\" (UID: \"c209e8d9-555d-4362-a44a-0279015e41f1\") " pod="openshift-infra/auto-csr-approver-29555532-dxrn2" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.362822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76gf5\" (UniqueName: \"kubernetes.io/projected/c209e8d9-555d-4362-a44a-0279015e41f1-kube-api-access-76gf5\") pod \"auto-csr-approver-29555532-dxrn2\" (UID: \"c209e8d9-555d-4362-a44a-0279015e41f1\") " pod="openshift-infra/auto-csr-approver-29555532-dxrn2" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.383597 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b5fd97b4f-mnlzd_1ad8036e-ef57-4f87-98fb-afeb9ff0bea0/console/0.log" Mar 12 16:12:00 crc kubenswrapper[4687]: 
I0312 16:12:00.383942 4687 generic.go:334] "Generic (PLEG): container finished" podID="1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" containerID="79ad9f8ecfd11222a2a22ac2a3368d080599c90b616318ff6ca1a8a5729d77bb" exitCode=2 Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.383975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5fd97b4f-mnlzd" event={"ID":"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0","Type":"ContainerDied","Data":"79ad9f8ecfd11222a2a22ac2a3368d080599c90b616318ff6ca1a8a5729d77bb"} Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.488890 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.557765 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b5fd97b4f-mnlzd_1ad8036e-ef57-4f87-98fb-afeb9ff0bea0/console/0.log" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.557897 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.628288 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-trusted-ca-bundle\") pod \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.628339 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-service-ca\") pod \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.628387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-config\") pod \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.628417 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgscl\" (UniqueName: \"kubernetes.io/projected/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-kube-api-access-zgscl\") pod \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.628454 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-serving-cert\") pod \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.628528 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-oauth-config\") pod \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.628568 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-oauth-serving-cert\") pod \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\" (UID: \"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0\") " Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.629318 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-config" (OuterVolumeSpecName: "console-config") pod "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" (UID: "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.629342 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" (UID: "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.629408 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-service-ca" (OuterVolumeSpecName: "service-ca") pod "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" (UID: "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.629744 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" (UID: "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.633664 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" (UID: "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.633709 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" (UID: "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.638655 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-kube-api-access-zgscl" (OuterVolumeSpecName: "kube-api-access-zgscl") pod "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" (UID: "1ad8036e-ef57-4f87-98fb-afeb9ff0bea0"). InnerVolumeSpecName "kube-api-access-zgscl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.729931 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.729979 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.729997 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.730014 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgscl\" (UniqueName: \"kubernetes.io/projected/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-kube-api-access-zgscl\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.730031 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.730043 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.730054 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:00 crc kubenswrapper[4687]: I0312 16:12:00.919431 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555532-dxrn2"] Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.395898 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b5fd97b4f-mnlzd_1ad8036e-ef57-4f87-98fb-afeb9ff0bea0/console/0.log" Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.396071 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b5fd97b4f-mnlzd" Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.396143 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5fd97b4f-mnlzd" event={"ID":"1ad8036e-ef57-4f87-98fb-afeb9ff0bea0","Type":"ContainerDied","Data":"cda32bff6a750286acc9e2d98691751b524786688c539957de47a28b21ce7671"} Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.396258 4687 scope.go:117] "RemoveContainer" containerID="79ad9f8ecfd11222a2a22ac2a3368d080599c90b616318ff6ca1a8a5729d77bb" Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.398139 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" event={"ID":"c209e8d9-555d-4362-a44a-0279015e41f1","Type":"ContainerStarted","Data":"760d1e43dc7b1fa2240d59854a828293311553285ca45b45a504fa1079f02056"} Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.444528 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b5fd97b4f-mnlzd"] Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.449564 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b5fd97b4f-mnlzd"] Mar 12 16:12:01 crc kubenswrapper[4687]: I0312 16:12:01.760688 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" path="/var/lib/kubelet/pods/1ad8036e-ef57-4f87-98fb-afeb9ff0bea0/volumes" Mar 12 16:12:02 crc kubenswrapper[4687]: I0312 16:12:02.408203 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" event={"ID":"c209e8d9-555d-4362-a44a-0279015e41f1","Type":"ContainerStarted","Data":"deafa1508404f6ad3f2053e701f89af1a973ad59e59ac5b836a915e9a2b75f22"} Mar 12 16:12:02 crc kubenswrapper[4687]: I0312 16:12:02.431499 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" podStartSLOduration=1.370399366 podStartE2EDuration="2.431460589s" podCreationTimestamp="2026-03-12 16:12:00 +0000 UTC" firstStartedPulling="2026-03-12 16:12:00.927917306 +0000 UTC m=+569.891879660" lastFinishedPulling="2026-03-12 16:12:01.988978519 +0000 UTC m=+570.952940883" observedRunningTime="2026-03-12 16:12:02.423956027 +0000 UTC m=+571.387918381" watchObservedRunningTime="2026-03-12 16:12:02.431460589 +0000 UTC m=+571.395422993" Mar 12 16:12:03 crc kubenswrapper[4687]: I0312 16:12:03.418741 4687 generic.go:334] "Generic (PLEG): container finished" podID="c209e8d9-555d-4362-a44a-0279015e41f1" containerID="deafa1508404f6ad3f2053e701f89af1a973ad59e59ac5b836a915e9a2b75f22" exitCode=0 Mar 12 16:12:03 crc kubenswrapper[4687]: I0312 16:12:03.418781 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" event={"ID":"c209e8d9-555d-4362-a44a-0279015e41f1","Type":"ContainerDied","Data":"deafa1508404f6ad3f2053e701f89af1a973ad59e59ac5b836a915e9a2b75f22"} Mar 12 16:12:04 crc kubenswrapper[4687]: I0312 16:12:04.724402 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" Mar 12 16:12:04 crc kubenswrapper[4687]: I0312 16:12:04.790826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76gf5\" (UniqueName: \"kubernetes.io/projected/c209e8d9-555d-4362-a44a-0279015e41f1-kube-api-access-76gf5\") pod \"c209e8d9-555d-4362-a44a-0279015e41f1\" (UID: \"c209e8d9-555d-4362-a44a-0279015e41f1\") " Mar 12 16:12:04 crc kubenswrapper[4687]: I0312 16:12:04.799826 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c209e8d9-555d-4362-a44a-0279015e41f1-kube-api-access-76gf5" (OuterVolumeSpecName: "kube-api-access-76gf5") pod "c209e8d9-555d-4362-a44a-0279015e41f1" (UID: "c209e8d9-555d-4362-a44a-0279015e41f1"). InnerVolumeSpecName "kube-api-access-76gf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:12:04 crc kubenswrapper[4687]: I0312 16:12:04.832007 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555526-j8xjw"] Mar 12 16:12:04 crc kubenswrapper[4687]: I0312 16:12:04.836492 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555526-j8xjw"] Mar 12 16:12:04 crc kubenswrapper[4687]: I0312 16:12:04.892068 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76gf5\" (UniqueName: \"kubernetes.io/projected/c209e8d9-555d-4362-a44a-0279015e41f1-kube-api-access-76gf5\") on node \"crc\" DevicePath \"\"" Mar 12 16:12:05 crc kubenswrapper[4687]: I0312 16:12:05.439907 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" event={"ID":"c209e8d9-555d-4362-a44a-0279015e41f1","Type":"ContainerDied","Data":"760d1e43dc7b1fa2240d59854a828293311553285ca45b45a504fa1079f02056"} Mar 12 16:12:05 crc kubenswrapper[4687]: I0312 16:12:05.439958 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760d1e43dc7b1fa2240d59854a828293311553285ca45b45a504fa1079f02056" Mar 12 16:12:05 crc kubenswrapper[4687]: I0312 16:12:05.439963 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555532-dxrn2" Mar 12 16:12:05 crc kubenswrapper[4687]: I0312 16:12:05.741049 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d7f52a-a823-4b24-a4f6-740b496c4540" path="/var/lib/kubelet/pods/93d7f52a-a823-4b24-a4f6-740b496c4540/volumes" Mar 12 16:12:14 crc kubenswrapper[4687]: I0312 16:12:14.122110 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:12:14 crc kubenswrapper[4687]: I0312 16:12:14.122566 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:12:44 crc kubenswrapper[4687]: I0312 16:12:44.121877 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:12:44 crc kubenswrapper[4687]: I0312 16:12:44.122537 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:12:57 crc kubenswrapper[4687]: I0312 16:12:57.031574 4687 scope.go:117] "RemoveContainer" containerID="cf4c4148630e5e513b65bdc907eb35681fcf137aee8e26bdd774daae8364a56c" Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.121944 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.122409 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.122458 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.123108 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9433415f064106c8d16d6be7a69dee0d049ec5465c0e65f1afbe31077aa75da"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.123174 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://b9433415f064106c8d16d6be7a69dee0d049ec5465c0e65f1afbe31077aa75da" gracePeriod=600 Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.924291 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="b9433415f064106c8d16d6be7a69dee0d049ec5465c0e65f1afbe31077aa75da" exitCode=0 Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.924341 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"b9433415f064106c8d16d6be7a69dee0d049ec5465c0e65f1afbe31077aa75da"} Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.924721 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"8e641712db1a1962856f1bc8834ef5662c58059d18ba81c47f6d78c93a0e3f14"} Mar 12 16:13:14 crc kubenswrapper[4687]: I0312 16:13:14.924750 4687 scope.go:117] "RemoveContainer" containerID="ca9af5a60d7edd2ac85d52bd0521eb3c6c568aec927cc166905e24d8ea4a0e8a" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.840632 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p"] Mar 12 16:13:52 crc kubenswrapper[4687]: E0312 16:13:52.841290 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" containerName="console" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.841303 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" containerName="console" Mar 12 16:13:52 crc kubenswrapper[4687]: E0312 16:13:52.841316 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c209e8d9-555d-4362-a44a-0279015e41f1" containerName="oc" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.841321 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c209e8d9-555d-4362-a44a-0279015e41f1" containerName="oc" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.841464 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c209e8d9-555d-4362-a44a-0279015e41f1" containerName="oc" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.841483 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad8036e-ef57-4f87-98fb-afeb9ff0bea0" containerName="console" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.842227 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.849993 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.854972 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p"] Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.884293 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.884509 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.884634 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsv7m\" (UniqueName: \"kubernetes.io/projected/7dffa69e-cac6-4faf-9808-ea2216e95faa-kube-api-access-vsv7m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.986467 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.986532 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.986582 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsv7m\" (UniqueName: \"kubernetes.io/projected/7dffa69e-cac6-4faf-9808-ea2216e95faa-kube-api-access-vsv7m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.987068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:52 crc kubenswrapper[4687]: I0312 16:13:52.987155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:53 crc kubenswrapper[4687]: I0312 16:13:53.005594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsv7m\" (UniqueName: \"kubernetes.io/projected/7dffa69e-cac6-4faf-9808-ea2216e95faa-kube-api-access-vsv7m\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:53 crc kubenswrapper[4687]: I0312 16:13:53.200538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:53 crc kubenswrapper[4687]: I0312 16:13:53.397050 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p"] Mar 12 16:13:54 crc kubenswrapper[4687]: I0312 16:13:54.172920 4687 generic.go:334] "Generic (PLEG): container finished" podID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerID="195055da3940cee2cf6351df182add9f0da28e9b4e27ab2b4260e16cab0c12c1" exitCode=0 Mar 12 16:13:54 crc kubenswrapper[4687]: I0312 16:13:54.173028 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" event={"ID":"7dffa69e-cac6-4faf-9808-ea2216e95faa","Type":"ContainerDied","Data":"195055da3940cee2cf6351df182add9f0da28e9b4e27ab2b4260e16cab0c12c1"} Mar 12 16:13:54 crc kubenswrapper[4687]: I0312 16:13:54.173221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" event={"ID":"7dffa69e-cac6-4faf-9808-ea2216e95faa","Type":"ContainerStarted","Data":"99111758208fa29b7ffeae2a7e07b2e6ec9e4096dcbb26a1721742b23e1027cf"} Mar 12 16:13:55 crc kubenswrapper[4687]: E0312 16:13:55.698681 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dffa69e_cac6_4faf_9808_ea2216e95faa.slice/crio-conmon-c786eeda10150edcdd20774cb0b445abd87b33a75c6fe92f5e27fd7dcbdc3ab0.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:13:56 crc kubenswrapper[4687]: I0312 16:13:56.187715 4687 generic.go:334] "Generic (PLEG): container finished" podID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerID="c786eeda10150edcdd20774cb0b445abd87b33a75c6fe92f5e27fd7dcbdc3ab0" exitCode=0 Mar 12 16:13:56 crc kubenswrapper[4687]: I0312 16:13:56.187753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" 
event={"ID":"7dffa69e-cac6-4faf-9808-ea2216e95faa","Type":"ContainerDied","Data":"c786eeda10150edcdd20774cb0b445abd87b33a75c6fe92f5e27fd7dcbdc3ab0"} Mar 12 16:13:57 crc kubenswrapper[4687]: I0312 16:13:57.197296 4687 generic.go:334] "Generic (PLEG): container finished" podID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerID="6add971b446c735eb6777f3d25e5dfba7dbf643ec71bad4ada6142a761214d2b" exitCode=0 Mar 12 16:13:57 crc kubenswrapper[4687]: I0312 16:13:57.197343 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" event={"ID":"7dffa69e-cac6-4faf-9808-ea2216e95faa","Type":"ContainerDied","Data":"6add971b446c735eb6777f3d25e5dfba7dbf643ec71bad4ada6142a761214d2b"} Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.471005 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.662905 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsv7m\" (UniqueName: \"kubernetes.io/projected/7dffa69e-cac6-4faf-9808-ea2216e95faa-kube-api-access-vsv7m\") pod \"7dffa69e-cac6-4faf-9808-ea2216e95faa\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.664012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-bundle\") pod \"7dffa69e-cac6-4faf-9808-ea2216e95faa\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.664199 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-util\") pod \"7dffa69e-cac6-4faf-9808-ea2216e95faa\" (UID: \"7dffa69e-cac6-4faf-9808-ea2216e95faa\") " Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.666632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-bundle" (OuterVolumeSpecName: "bundle") pod "7dffa69e-cac6-4faf-9808-ea2216e95faa" (UID: "7dffa69e-cac6-4faf-9808-ea2216e95faa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.668980 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dffa69e-cac6-4faf-9808-ea2216e95faa-kube-api-access-vsv7m" (OuterVolumeSpecName: "kube-api-access-vsv7m") pod "7dffa69e-cac6-4faf-9808-ea2216e95faa" (UID: "7dffa69e-cac6-4faf-9808-ea2216e95faa"). InnerVolumeSpecName "kube-api-access-vsv7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.676883 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-util" (OuterVolumeSpecName: "util") pod "7dffa69e-cac6-4faf-9808-ea2216e95faa" (UID: "7dffa69e-cac6-4faf-9808-ea2216e95faa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.766553 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.766600 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7dffa69e-cac6-4faf-9808-ea2216e95faa-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:13:58 crc kubenswrapper[4687]: I0312 16:13:58.766618 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsv7m\" (UniqueName: \"kubernetes.io/projected/7dffa69e-cac6-4faf-9808-ea2216e95faa-kube-api-access-vsv7m\") on node \"crc\" DevicePath \"\"" Mar 12 16:13:59 crc kubenswrapper[4687]: I0312 16:13:59.212175 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" event={"ID":"7dffa69e-cac6-4faf-9808-ea2216e95faa","Type":"ContainerDied","Data":"99111758208fa29b7ffeae2a7e07b2e6ec9e4096dcbb26a1721742b23e1027cf"} Mar 12 16:13:59 crc kubenswrapper[4687]: I0312 16:13:59.212216 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99111758208fa29b7ffeae2a7e07b2e6ec9e4096dcbb26a1721742b23e1027cf" Mar 12 16:13:59 crc kubenswrapper[4687]: I0312 16:13:59.212261 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.129685 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555534-vs8t2"] Mar 12 16:14:00 crc kubenswrapper[4687]: E0312 16:14:00.130211 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerName="pull" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.130226 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerName="pull" Mar 12 16:14:00 crc kubenswrapper[4687]: E0312 16:14:00.130248 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerName="util" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.130257 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerName="util" Mar 12 16:14:00 crc kubenswrapper[4687]: E0312 16:14:00.130266 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerName="extract" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.130275 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerName="extract" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.130462 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dffa69e-cac6-4faf-9808-ea2216e95faa" containerName="extract" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.130970 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555534-vs8t2" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.133269 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.133796 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.139759 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.142339 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555534-vs8t2"] Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.176250 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktgv\" (UniqueName: \"kubernetes.io/projected/7909e1ad-45be-45cc-83d9-8efd939918c8-kube-api-access-6ktgv\") pod \"auto-csr-approver-29555534-vs8t2\" (UID: \"7909e1ad-45be-45cc-83d9-8efd939918c8\") " pod="openshift-infra/auto-csr-approver-29555534-vs8t2" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.277381 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ktgv\" (UniqueName: \"kubernetes.io/projected/7909e1ad-45be-45cc-83d9-8efd939918c8-kube-api-access-6ktgv\") pod \"auto-csr-approver-29555534-vs8t2\" (UID: \"7909e1ad-45be-45cc-83d9-8efd939918c8\") " pod="openshift-infra/auto-csr-approver-29555534-vs8t2" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.293802 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ktgv\" (UniqueName: \"kubernetes.io/projected/7909e1ad-45be-45cc-83d9-8efd939918c8-kube-api-access-6ktgv\") pod \"auto-csr-approver-29555534-vs8t2\" (UID: \"7909e1ad-45be-45cc-83d9-8efd939918c8\") " pod="openshift-infra/auto-csr-approver-29555534-vs8t2" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.444234 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555534-vs8t2" Mar 12 16:14:00 crc kubenswrapper[4687]: I0312 16:14:00.853180 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555534-vs8t2"] Mar 12 16:14:01 crc kubenswrapper[4687]: I0312 16:14:01.225131 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555534-vs8t2" event={"ID":"7909e1ad-45be-45cc-83d9-8efd939918c8","Type":"ContainerStarted","Data":"276df572ea93d58b9427e160b0e19d214ab4a871159315c86807c34e95d61bc1"} Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.235881 4687 generic.go:334] "Generic (PLEG): container finished" podID="7909e1ad-45be-45cc-83d9-8efd939918c8" containerID="4ba53f969ae490d513ef540bde2e74409bf5bbead56c4785d9982033eff2fe9b" exitCode=0 Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.235930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555534-vs8t2" event={"ID":"7909e1ad-45be-45cc-83d9-8efd939918c8","Type":"ContainerDied","Data":"4ba53f969ae490d513ef540bde2e74409bf5bbead56c4785d9982033eff2fe9b"} Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.939725 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nj99p"] Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.940094 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-controller" containerID="cri-o://475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" gracePeriod=30 Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.940540 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="sbdb" containerID="cri-o://2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" gracePeriod=30 Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.940596 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="nbdb" containerID="cri-o://6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" gracePeriod=30 Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.940654 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="northd" containerID="cri-o://56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" gracePeriod=30 Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.940712 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" gracePeriod=30 Mar 12 16:14:03 crc kubenswrapper[4687]: I0312 16:14:03.940763 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kube-rbac-proxy-node" containerID="cri-o://13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" gracePeriod=30 Mar 12 16:14:03 crc 
kubenswrapper[4687]: I0312 16:14:03.940794 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-acl-logging" containerID="cri-o://137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" gracePeriod=30 Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.031618 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovnkube-controller" containerID="cri-o://1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" gracePeriod=30 Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.102552 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.111527 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.111649 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.112865 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.112900 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="sbdb" Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.117634 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.121009 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 12 16:14:04 crc kubenswrapper[4687]: E0312 16:14:04.121067 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="nbdb" Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.249991 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nj99p_b3ee11e6-3caf-46f7-8321-84633755d718/ovn-acl-logging/0.log" Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.253892 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nj99p_b3ee11e6-3caf-46f7-8321-84633755d718/ovn-controller/0.log" Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259795 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" exitCode=0 Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259823 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" exitCode=0 Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259831 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" exitCode=143 Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259839 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" exitCode=143 Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9"} Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259919 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370"} Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259934 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834"} Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.259943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" 
event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90"} Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.269875 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9k44w_bc5d5523-b52d-4739-9a22-3abb886d7f0d/kube-multus/0.log" Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.269922 4687 generic.go:334] "Generic (PLEG): container finished" podID="bc5d5523-b52d-4739-9a22-3abb886d7f0d" containerID="c04597142928391fc8bc391ee9d35898524a2f8a0fe45472693db2975801d784" exitCode=2 Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.270112 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9k44w" event={"ID":"bc5d5523-b52d-4739-9a22-3abb886d7f0d","Type":"ContainerDied","Data":"c04597142928391fc8bc391ee9d35898524a2f8a0fe45472693db2975801d784"} Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.270604 4687 scope.go:117] "RemoveContainer" containerID="c04597142928391fc8bc391ee9d35898524a2f8a0fe45472693db2975801d784" Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.434824 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555534-vs8t2" Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.537859 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ktgv\" (UniqueName: \"kubernetes.io/projected/7909e1ad-45be-45cc-83d9-8efd939918c8-kube-api-access-6ktgv\") pod \"7909e1ad-45be-45cc-83d9-8efd939918c8\" (UID: \"7909e1ad-45be-45cc-83d9-8efd939918c8\") " Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.543826 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7909e1ad-45be-45cc-83d9-8efd939918c8-kube-api-access-6ktgv" (OuterVolumeSpecName: "kube-api-access-6ktgv") pod "7909e1ad-45be-45cc-83d9-8efd939918c8" (UID: "7909e1ad-45be-45cc-83d9-8efd939918c8"). InnerVolumeSpecName "kube-api-access-6ktgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:14:04 crc kubenswrapper[4687]: I0312 16:14:04.639477 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ktgv\" (UniqueName: \"kubernetes.io/projected/7909e1ad-45be-45cc-83d9-8efd939918c8-kube-api-access-6ktgv\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.144292 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nj99p_b3ee11e6-3caf-46f7-8321-84633755d718/ovn-acl-logging/0.log" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.145311 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nj99p_b3ee11e6-3caf-46f7-8321-84633755d718/ovn-controller/0.log" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.145794 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.217759 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dmvhk"] Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.217964 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kube-rbac-proxy-node" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.217976 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kube-rbac-proxy-node" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.217989 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-controller" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.217997 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-controller" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218007 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="sbdb" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218013 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="sbdb" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218022 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7909e1ad-45be-45cc-83d9-8efd939918c8" containerName="oc" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218027 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7909e1ad-45be-45cc-83d9-8efd939918c8" containerName="oc" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218034 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="northd" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218040 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="northd" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218050 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kubecfg-setup" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218055 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kubecfg-setup" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218064 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-acl-logging" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218070 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-acl-logging" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218078 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovnkube-controller" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218083 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovnkube-controller" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218093 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218099 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.218111 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="nbdb" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218117 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="nbdb" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218210 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovnkube-controller" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218221 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kube-rbac-proxy-node" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218231 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7909e1ad-45be-45cc-83d9-8efd939918c8" containerName="oc" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218239 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="northd" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218245 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="nbdb" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218251 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="sbdb" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218260 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-controller" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218267 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="ovn-acl-logging" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.218277 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.220151 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247103 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-var-lib-openvswitch\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247159 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247200 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-systemd\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247228 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-openvswitch\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247281 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-systemd-units\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247311 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-node-log\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247337 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-ovn-kubernetes\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247383 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-netns\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-ovn\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247452 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqfdq\" (UniqueName: 
\"kubernetes.io/projected/b3ee11e6-3caf-46f7-8321-84633755d718-kube-api-access-kqfdq\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247483 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-log-socket\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-script-lib\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247532 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-netd\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247552 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-kubelet\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247575 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-etc-openvswitch\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247603 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-bin\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247632 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ee11e6-3caf-46f7-8321-84633755d718-ovn-node-metrics-cert\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247656 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-config\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247687 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-env-overrides\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.247750 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-slash\") pod \"b3ee11e6-3caf-46f7-8321-84633755d718\" (UID: \"b3ee11e6-3caf-46f7-8321-84633755d718\") " Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248077 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-slash" (OuterVolumeSpecName: "host-slash") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248123 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248150 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248474 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248624 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-node-log" (OuterVolumeSpecName: "node-log") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248884 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.248924 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-log-socket" (OuterVolumeSpecName: "log-socket") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.249099 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.249115 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.249193 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.249415 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.249438 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.249448 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.253620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ee11e6-3caf-46f7-8321-84633755d718-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.253618 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ee11e6-3caf-46f7-8321-84633755d718-kube-api-access-kqfdq" (OuterVolumeSpecName: "kube-api-access-kqfdq") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "kube-api-access-kqfdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.281302 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9k44w_bc5d5523-b52d-4739-9a22-3abb886d7f0d/kube-multus/0.log" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.281432 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9k44w" event={"ID":"bc5d5523-b52d-4739-9a22-3abb886d7f0d","Type":"ContainerStarted","Data":"6c2c8f5f947d188ea83ad1ba781eb1ea7c6fa4f27975537d8e524d36c969b29a"} Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.282469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b3ee11e6-3caf-46f7-8321-84633755d718" (UID: "b3ee11e6-3caf-46f7-8321-84633755d718"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.285910 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nj99p_b3ee11e6-3caf-46f7-8321-84633755d718/ovn-acl-logging/0.log" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286282 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nj99p_b3ee11e6-3caf-46f7-8321-84633755d718/ovn-controller/0.log" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286555 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" exitCode=0 Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286575 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" exitCode=0 Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286585 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" exitCode=0 Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286592 4687 generic.go:334] "Generic (PLEG): container finished" podID="b3ee11e6-3caf-46f7-8321-84633755d718" containerID="13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" exitCode=0 Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286626 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b"} Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286650 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64"} Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c"} Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286669 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc"} Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286678 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" event={"ID":"b3ee11e6-3caf-46f7-8321-84633755d718","Type":"ContainerDied","Data":"51d6ef96f5fbddf92d9d64080adc939e996c4b3ee4e8b50de71cd7ba44902610"} Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286693 4687 scope.go:117] "RemoveContainer" containerID="1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.286807 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj99p" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.293999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555534-vs8t2" event={"ID":"7909e1ad-45be-45cc-83d9-8efd939918c8","Type":"ContainerDied","Data":"276df572ea93d58b9427e160b0e19d214ab4a871159315c86807c34e95d61bc1"} Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.294031 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276df572ea93d58b9427e160b0e19d214ab4a871159315c86807c34e95d61bc1" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.294074 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555534-vs8t2" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.319112 4687 scope.go:117] "RemoveContainer" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.339484 4687 scope.go:117] "RemoveContainer" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.341158 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nj99p"] Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-run-ovn-kubernetes\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349201 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-cni-bin\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349226 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-ovn\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349245 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-env-overrides\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-slash\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-run-netns\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349311 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-cni-netd\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349354 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-systemd\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349406 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fwrk\" (UniqueName: \"kubernetes.io/projected/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-kube-api-access-5fwrk\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349424 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-systemd-units\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349464 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovn-node-metrics-cert\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349500 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-var-lib-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: 
I0312 16:14:05.349526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-etc-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349548 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovnkube-config\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349571 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-log-socket\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovnkube-script-lib\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349614 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-node-log\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-kubelet\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349670 4687 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-log-socket\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349680 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349689 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349698 4687 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349709 4687 reconciler_common.go:293] "Volume detached for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349719 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349732 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3ee11e6-3caf-46f7-8321-84633755d718-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349742 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349752 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3ee11e6-3caf-46f7-8321-84633755d718-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349762 4687 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-slash\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349773 4687 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349782 4687 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349795 4687 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349804 4687 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349812 4687 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349820 4687 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-node-log\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349832 4687 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349843 4687 reconciler_common.go:293] "Volume detached for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349853 4687 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3ee11e6-3caf-46f7-8321-84633755d718-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.349863 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqfdq\" (UniqueName: \"kubernetes.io/projected/b3ee11e6-3caf-46f7-8321-84633755d718-kube-api-access-kqfdq\") on node \"crc\" DevicePath \"\"" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.350326 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nj99p"] Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.353314 4687 scope.go:117] "RemoveContainer" containerID="56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.373542 4687 scope.go:117] "RemoveContainer" containerID="cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.394034 4687 scope.go:117] "RemoveContainer" containerID="13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.410313 4687 scope.go:117] "RemoveContainer" containerID="137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.434714 4687 scope.go:117] "RemoveContainer" containerID="475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451507 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-var-lib-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451570 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-etc-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451599 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovnkube-config\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451651 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-log-socket\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovnkube-script-lib\") pod 
\"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451710 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-node-log\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451756 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-log-socket\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451747 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-etc-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-kubelet\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451850 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-node-log\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-kubelet\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-run-ovn-kubernetes\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451965 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-cni-bin\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.451982 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-ovn\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: 
I0312 16:14:05.452024 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-env-overrides\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-slash\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-run-netns\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452121 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-cni-netd\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452177 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-cni-bin\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452193 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-systemd\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452253 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-ovn\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-cni-netd\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-slash\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452313 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-run-netns\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452319 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-systemd\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fwrk\" (UniqueName: \"kubernetes.io/projected/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-kube-api-access-5fwrk\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452412 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-systemd-units\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovn-node-metrics-cert\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452631 4687 scope.go:117] "RemoveContainer" containerID="d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-run-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc 
kubenswrapper[4687]: I0312 16:14:05.452784 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-systemd-units\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.452907 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-host-run-ovn-kubernetes\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.453167 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-env-overrides\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.453215 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovnkube-script-lib\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.453220 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovnkube-config\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.453246 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-var-lib-openvswitch\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.455284 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-ovn-node-metrics-cert\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.472957 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fwrk\" (UniqueName: \"kubernetes.io/projected/94a076eb-c02b-467b-bdc2-7a33ba3ec8a1-kube-api-access-5fwrk\") pod \"ovnkube-node-dmvhk\" (UID: \"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.499289 4687 scope.go:117] "RemoveContainer" containerID="1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.499855 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": container with ID starting with 
1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9 not found: ID does not exist" containerID="1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.499890 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9"} err="failed to get container status \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": rpc error: code = NotFound desc = could not find container \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": container with ID starting with 1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.499911 4687 scope.go:117] "RemoveContainer" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.500272 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": container with ID starting with 2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370 not found: ID does not exist" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.500323 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370"} err="failed to get container status \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": rpc error: code = NotFound desc = could not find container \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": container with ID starting with 2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.500349 4687 scope.go:117] "RemoveContainer" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.500630 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": container with ID starting with 6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b not found: ID does not exist" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.500662 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b"} err="failed to get container status \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": rpc error: code = NotFound desc = could not find container \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": container with ID starting with 6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.500680 4687 scope.go:117] "RemoveContainer" containerID="56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.500891 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": container with ID starting with 56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64 not found: ID does not exist" containerID="56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.500915 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64"} err="failed to get container status \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": rpc error: code = NotFound desc = could not find container \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": container with ID starting with 56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.500929 4687 scope.go:117] "RemoveContainer" containerID="cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.501144 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": container with ID starting with cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c not found: ID does not exist" containerID="cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501163 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c"} err="failed to get container status \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": rpc error: code = NotFound desc = could not find container \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": container with ID starting with cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501179 4687 scope.go:117] "RemoveContainer" containerID="13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.501437 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": container with ID starting with 13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc not found: ID does not exist" containerID="13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501461 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc"} err="failed to get container status \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": rpc error: code = NotFound desc = could not find container \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": container with ID starting with 13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501475 4687 scope.go:117] "RemoveContainer" 
containerID="137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.501678 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": container with ID starting with 137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834 not found: ID does not exist" containerID="137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501698 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834"} err="failed to get container status \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": rpc error: code = NotFound desc = could not find container \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": container with ID starting with 137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501714 4687 scope.go:117] "RemoveContainer" containerID="475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.501908 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": container with ID starting with 475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90 not found: ID does not exist" containerID="475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501934 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90"} err="failed to get container status \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": rpc error: code = NotFound desc = could not find container \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": container with ID starting with 475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.501983 4687 scope.go:117] "RemoveContainer" containerID="d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1" Mar 12 16:14:05 crc kubenswrapper[4687]: E0312 16:14:05.502305 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": container with ID starting with d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1 not found: ID does not exist" containerID="d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.502349 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1"} err="failed to get container status \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": rpc error: code = NotFound desc = could not find container \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": container with ID starting with 
d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.502382 4687 scope.go:117] "RemoveContainer" containerID="1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.502631 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9"} err="failed to get container status \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": rpc error: code = NotFound desc = could not find container \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": container with ID starting with 1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.502650 4687 scope.go:117] "RemoveContainer" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.502892 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370"} err="failed to get container status \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": rpc error: code = NotFound desc = could not find container \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": container with ID starting with 2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.502910 4687 scope.go:117] "RemoveContainer" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503121 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b"} err="failed to get container status \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": rpc error: code = NotFound desc = could not find container \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": container with ID starting with 6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503143 4687 scope.go:117] "RemoveContainer" containerID="56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503412 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64"} err="failed to get container status \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": rpc error: code = NotFound desc = could not find container \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": container with ID starting with 56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503432 4687 scope.go:117] "RemoveContainer" containerID="cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503682 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c"} err="failed to get container status \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": rpc error: code = NotFound desc = could not find container \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": container with ID starting with cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503701 4687 scope.go:117] "RemoveContainer" containerID="13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503883 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc"} err="failed to get container status \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": rpc error: code = NotFound desc = could not find container \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": container with ID starting with 13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.503900 4687 scope.go:117] "RemoveContainer" containerID="137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.504214 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834"} err="failed to get container status \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": rpc error: code = NotFound desc = could not find container \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": container with ID starting with 137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.504233 4687 scope.go:117] "RemoveContainer" containerID="475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.504822 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90"} err="failed to get container status \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": rpc error: code = NotFound desc = could not find container \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": container with ID starting with 475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.504846 4687 scope.go:117] "RemoveContainer" containerID="d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.505057 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1"} err="failed to get container status \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": rpc error: code = NotFound desc = could not find container \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": container with ID starting with d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1 not found: ID does not exist" Mar 
12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.505074 4687 scope.go:117] "RemoveContainer" containerID="1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.505315 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9"} err="failed to get container status \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": rpc error: code = NotFound desc = could not find container \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": container with ID starting with 1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.505341 4687 scope.go:117] "RemoveContainer" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.507570 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370"} err="failed to get container status \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": rpc error: code = NotFound desc = could not find container \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": container with ID starting with 2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.507643 4687 scope.go:117] "RemoveContainer" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.508010 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b"} err="failed to get container status \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": rpc error: code = NotFound desc = could not find container \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": container with ID starting with 6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.508049 4687 scope.go:117] "RemoveContainer" containerID="56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.508494 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64"} err="failed to get container status \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": rpc error: code = NotFound desc = could not find container \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": container with ID starting with 56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.508519 4687 scope.go:117] "RemoveContainer" containerID="cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.508886 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c"} err="failed to get container status 
\"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": rpc error: code = NotFound desc = could not find container \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": container with ID starting with cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.508913 4687 scope.go:117] "RemoveContainer" containerID="13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.509213 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc"} err="failed to get container status \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": rpc error: code = NotFound desc = could not find container \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": container with ID starting with 13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.509239 4687 scope.go:117] "RemoveContainer" containerID="137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.509535 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834"} err="failed to get container status \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": rpc error: code = NotFound desc = could not find container \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": container with ID starting with 137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.509556 4687 scope.go:117] "RemoveContainer" containerID="475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.509828 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90"} err="failed to get container status \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": rpc error: code = NotFound desc = could not find container \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": container with ID starting with 475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.509850 4687 scope.go:117] "RemoveContainer" containerID="d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.510109 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1"} err="failed to get container status \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": rpc error: code = NotFound desc = could not find container \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": container with ID starting with d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.510144 4687 scope.go:117] "RemoveContainer" 
containerID="1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.510414 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9"} err="failed to get container status \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": rpc error: code = NotFound desc = could not find container \"1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9\": container with ID starting with 1c05e2572c2824696046755db8108b0ae74801bcd4c6eacd8d09bbb14adf69c9 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.510442 4687 scope.go:117] "RemoveContainer" containerID="2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.510737 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370"} err="failed to get container status \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": rpc error: code = NotFound desc = could not find container \"2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370\": container with ID starting with 2f5bc5bb3b67c2b82a285dba9d2cb497ce50d910d14c392e173f6093f2618370 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.510754 4687 scope.go:117] "RemoveContainer" containerID="6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.510984 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b"} err="failed to get container status \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": rpc error: code = NotFound desc = could not find container \"6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b\": container with ID starting with 6d81035877735a4360ec23a50bbb5133171987052fcb7936670675c08f67af4b not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.511004 4687 scope.go:117] "RemoveContainer" containerID="56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.511303 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64"} err="failed to get container status \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": rpc error: code = NotFound desc = could not find container \"56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64\": container with ID starting with 56496ae7c65ff9ea6fb2144d471e8207d491f3595d791a06690e69790bd3bc64 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.511329 4687 scope.go:117] "RemoveContainer" containerID="cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.511622 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c"} err="failed to get container status \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": rpc error: code = NotFound desc = could not find 
container \"cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c\": container with ID starting with cabf96a1533805b5c0e11c73a2e23f56171e19578a6e95668cc2e1188bc4da1c not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.511642 4687 scope.go:117] "RemoveContainer" containerID="13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.511889 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc"} err="failed to get container status \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": rpc error: code = NotFound desc = could not find container \"13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc\": container with ID starting with 13863f101b40486306d2093d25fab1e61ba47bfa123b6ca4e0e5a6a53bd30ccc not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.511920 4687 scope.go:117] "RemoveContainer" containerID="137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.512181 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834"} err="failed to get container status \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": rpc error: code = NotFound desc = could not find container \"137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834\": container with ID starting with 137ebf0ca99f9aafcb8b618d8f7abab8e15f7154d6cdb558daab1e7a5737a834 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.512202 4687 scope.go:117] "RemoveContainer" containerID="475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.512442 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90"} err="failed to get container status \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": rpc error: code = NotFound desc = could not find container \"475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90\": container with ID starting with 475d09a503ad24921fcaed8e4e907834393a52945630087d1fabca6cde1f1a90 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.512466 4687 scope.go:117] "RemoveContainer" containerID="d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.512688 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1"} err="failed to get container status \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": rpc error: code = NotFound desc = could not find container \"d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1\": container with ID starting with d710cab9d175dd9f70b581850ab14feb650f6c1f5943fb5bc3ccaf90195e6ac1 not found: ID does not exist" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.526402 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555528-7r7ld"] Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.529966 4687 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555528-7r7ld"] Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.536282 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:05 crc kubenswrapper[4687]: W0312 16:14:05.562228 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94a076eb_c02b_467b_bdc2_7a33ba3ec8a1.slice/crio-2e2399c5f3cb21ad2ec24eac534f9fcb8cb94f84d02c3980cc3bcbcfc86c48d9 WatchSource:0}: Error finding container 2e2399c5f3cb21ad2ec24eac534f9fcb8cb94f84d02c3980cc3bcbcfc86c48d9: Status 404 returned error can't find the container with id 2e2399c5f3cb21ad2ec24eac534f9fcb8cb94f84d02c3980cc3bcbcfc86c48d9 Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.740890 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ee4eba-7b39-4d7e-acf0-b5dad82088d2" path="/var/lib/kubelet/pods/08ee4eba-7b39-4d7e-acf0-b5dad82088d2/volumes" Mar 12 16:14:05 crc kubenswrapper[4687]: I0312 16:14:05.741613 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ee11e6-3caf-46f7-8321-84633755d718" path="/var/lib/kubelet/pods/b3ee11e6-3caf-46f7-8321-84633755d718/volumes" Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.320289 4687 generic.go:334] "Generic (PLEG): container finished" podID="94a076eb-c02b-467b-bdc2-7a33ba3ec8a1" containerID="ec5ca5b0650c469137dab0f39830fd6c8dad63a165ac34604afb4bacbd65940a" exitCode=0 Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.320555 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerDied","Data":"ec5ca5b0650c469137dab0f39830fd6c8dad63a165ac34604afb4bacbd65940a"} Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.320577 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"2e2399c5f3cb21ad2ec24eac534f9fcb8cb94f84d02c3980cc3bcbcfc86c48d9"} Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.947350 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2"] Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.948424 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.950300 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.950700 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.950705 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-gvt9r" Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.995862 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7"] Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.996608 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.998089 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-q2v7p" Mar 12 16:14:06 crc kubenswrapper[4687]: I0312 16:14:06.998156 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.003887 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn"] Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.004670 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.077492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4vtn\" (UniqueName: \"kubernetes.io/projected/f97d4a45-34b0-4192-a7cb-05d23d1b614d-kube-api-access-w4vtn\") pod \"obo-prometheus-operator-68bc856cb9-sb8w2\" (UID: \"f97d4a45-34b0-4192-a7cb-05d23d1b614d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.174231 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tbpsw"] Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.175063 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.179007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4vtn\" (UniqueName: \"kubernetes.io/projected/f97d4a45-34b0-4192-a7cb-05d23d1b614d-kube-api-access-w4vtn\") pod \"obo-prometheus-operator-68bc856cb9-sb8w2\" (UID: \"f97d4a45-34b0-4192-a7cb-05d23d1b614d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.179098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e487d62-c210-4aa0-b5a2-371bcf18cad5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7\" (UID: \"9e487d62-c210-4aa0-b5a2-371bcf18cad5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.179145 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e487d62-c210-4aa0-b5a2-371bcf18cad5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7\" (UID: \"9e487d62-c210-4aa0-b5a2-371bcf18cad5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.179171 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn\" (UID: 
\"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.179267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn\" (UID: \"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.179858 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.180027 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-62lhj" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.209341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4vtn\" (UniqueName: \"kubernetes.io/projected/f97d4a45-34b0-4192-a7cb-05d23d1b614d-kube-api-access-w4vtn\") pod \"obo-prometheus-operator-68bc856cb9-sb8w2\" (UID: \"f97d4a45-34b0-4192-a7cb-05d23d1b614d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.262674 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.280105 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnbc\" (UniqueName: \"kubernetes.io/projected/afacf716-028a-4848-a495-83f7c01a47ca-kube-api-access-fjnbc\") pod \"observability-operator-59bdc8b94-tbpsw\" (UID: \"afacf716-028a-4848-a495-83f7c01a47ca\") " pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.280181 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e487d62-c210-4aa0-b5a2-371bcf18cad5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7\" (UID: \"9e487d62-c210-4aa0-b5a2-371bcf18cad5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.280213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/afacf716-028a-4848-a495-83f7c01a47ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tbpsw\" (UID: \"afacf716-028a-4848-a495-83f7c01a47ca\") " pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.280246 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e487d62-c210-4aa0-b5a2-371bcf18cad5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7\" (UID: \"9e487d62-c210-4aa0-b5a2-371bcf18cad5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.280273 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn\" (UID: \"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.280295 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn\" (UID: \"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.284416 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn\" (UID: \"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.290829 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn\" (UID: \"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.294437 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(47a8c16f13af31f243da2c78218d75bb3f83063af5fa7cc845b4a21f1873e1f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.294495 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(47a8c16f13af31f243da2c78218d75bb3f83063af5fa7cc845b4a21f1873e1f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.294515 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(47a8c16f13af31f243da2c78218d75bb3f83063af5fa7cc845b4a21f1873e1f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.294556 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators(f97d4a45-34b0-4192-a7cb-05d23d1b614d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators(f97d4a45-34b0-4192-a7cb-05d23d1b614d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(47a8c16f13af31f243da2c78218d75bb3f83063af5fa7cc845b4a21f1873e1f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" podUID="f97d4a45-34b0-4192-a7cb-05d23d1b614d" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.295939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e487d62-c210-4aa0-b5a2-371bcf18cad5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7\" (UID: \"9e487d62-c210-4aa0-b5a2-371bcf18cad5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.298843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e487d62-c210-4aa0-b5a2-371bcf18cad5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7\" (UID: \"9e487d62-c210-4aa0-b5a2-371bcf18cad5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.314556 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.323725 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.331896 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"d89ea3b2cfa574b1c41a7db7564505952c0775407a441df0945b78c38e240bf7"} Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.332089 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"f6e0cefe0ae27d2ad313ccac0d352a78033f65092d1fba03d6577a8cea9a3e31"} Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.332179 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"8eff8104662ef6e5a93dce9a581a205287521de69a6be3e81992a439f6bd8cbc"} Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.332255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"87ade75d81086971f361f22b147b58fa7e96db977f30b725da54d3f0a2cbfc26"} Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.332319 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"9c43a40b38297967ee5d2704a99351bda92ff4ac58054dd43f6616f2b03c07d8"} Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.332398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"1b1e3f445b7d0688afa537a0cd87e604057a00fc7cfad59f9662d06478c1ec87"} Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.346880 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(73924810d0e469cfd6e641e4e3f324222db4a61710a094522b46993be2a03d76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.346937 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(73924810d0e469cfd6e641e4e3f324222db4a61710a094522b46993be2a03d76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.346957 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(73924810d0e469cfd6e641e4e3f324222db4a61710a094522b46993be2a03d76): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.346999 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators(9e487d62-c210-4aa0-b5a2-371bcf18cad5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators(9e487d62-c210-4aa0-b5a2-371bcf18cad5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(73924810d0e469cfd6e641e4e3f324222db4a61710a094522b46993be2a03d76): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" podUID="9e487d62-c210-4aa0-b5a2-371bcf18cad5" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.351410 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(06eb6867d528668dbb8be20793a9636bdad495af5bea2f75b71e96bf8584a188): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.351455 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(06eb6867d528668dbb8be20793a9636bdad495af5bea2f75b71e96bf8584a188): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.351475 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(06eb6867d528668dbb8be20793a9636bdad495af5bea2f75b71e96bf8584a188): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.351510 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators(b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators(b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(06eb6867d528668dbb8be20793a9636bdad495af5bea2f75b71e96bf8584a188): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" podUID="b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.382240 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/afacf716-028a-4848-a495-83f7c01a47ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tbpsw\" (UID: \"afacf716-028a-4848-a495-83f7c01a47ca\") " pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.382389 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnbc\" (UniqueName: \"kubernetes.io/projected/afacf716-028a-4848-a495-83f7c01a47ca-kube-api-access-fjnbc\") pod \"observability-operator-59bdc8b94-tbpsw\" (UID: \"afacf716-028a-4848-a495-83f7c01a47ca\") " pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.383641 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-z25z9"] Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.384420 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.387250 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-gqjf4" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.389028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/afacf716-028a-4848-a495-83f7c01a47ca-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tbpsw\" (UID: \"afacf716-028a-4848-a495-83f7c01a47ca\") " pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.407681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnbc\" (UniqueName: \"kubernetes.io/projected/afacf716-028a-4848-a495-83f7c01a47ca-kube-api-access-fjnbc\") pod \"observability-operator-59bdc8b94-tbpsw\" (UID: \"afacf716-028a-4848-a495-83f7c01a47ca\") " pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.483552 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84b4\" (UniqueName: \"kubernetes.io/projected/98e16c84-ec9c-482a-8962-ce13556ffd74-kube-api-access-b84b4\") pod \"perses-operator-5bf474d74f-z25z9\" (UID: \"98e16c84-ec9c-482a-8962-ce13556ffd74\") " pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.483636 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/98e16c84-ec9c-482a-8962-ce13556ffd74-openshift-service-ca\") pod \"perses-operator-5bf474d74f-z25z9\" (UID: \"98e16c84-ec9c-482a-8962-ce13556ffd74\") " pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.490297 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.511520 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(01717c9736784869f10bf873723a94b51bc2038715d4021779a8edcd482b7cce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.511639 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(01717c9736784869f10bf873723a94b51bc2038715d4021779a8edcd482b7cce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.511717 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(01717c9736784869f10bf873723a94b51bc2038715d4021779a8edcd482b7cce): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.511808 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-tbpsw_openshift-operators(afacf716-028a-4848-a495-83f7c01a47ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-tbpsw_openshift-operators(afacf716-028a-4848-a495-83f7c01a47ca)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(01717c9736784869f10bf873723a94b51bc2038715d4021779a8edcd482b7cce): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.585385 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/98e16c84-ec9c-482a-8962-ce13556ffd74-openshift-service-ca\") pod \"perses-operator-5bf474d74f-z25z9\" (UID: \"98e16c84-ec9c-482a-8962-ce13556ffd74\") " pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.585498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84b4\" (UniqueName: \"kubernetes.io/projected/98e16c84-ec9c-482a-8962-ce13556ffd74-kube-api-access-b84b4\") pod \"perses-operator-5bf474d74f-z25z9\" (UID: \"98e16c84-ec9c-482a-8962-ce13556ffd74\") " pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.586283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/98e16c84-ec9c-482a-8962-ce13556ffd74-openshift-service-ca\") pod \"perses-operator-5bf474d74f-z25z9\" (UID: \"98e16c84-ec9c-482a-8962-ce13556ffd74\") " pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.607294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b84b4\" (UniqueName: \"kubernetes.io/projected/98e16c84-ec9c-482a-8962-ce13556ffd74-kube-api-access-b84b4\") pod \"perses-operator-5bf474d74f-z25z9\" (UID: \"98e16c84-ec9c-482a-8962-ce13556ffd74\") " pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: I0312 16:14:07.701413 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.727288 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3c599511aae898c75c2eb901f47939a7dde925527546806c36b71d17496c407d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.727482 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3c599511aae898c75c2eb901f47939a7dde925527546806c36b71d17496c407d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.727575 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3c599511aae898c75c2eb901f47939a7dde925527546806c36b71d17496c407d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:07 crc kubenswrapper[4687]: E0312 16:14:07.727700 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-z25z9_openshift-operators(98e16c84-ec9c-482a-8962-ce13556ffd74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-z25z9_openshift-operators(98e16c84-ec9c-482a-8962-ce13556ffd74)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3c599511aae898c75c2eb901f47939a7dde925527546806c36b71d17496c407d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" podUID="98e16c84-ec9c-482a-8962-ce13556ffd74" Mar 12 16:14:09 crc kubenswrapper[4687]: I0312 16:14:09.353942 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"bf5dd9daad217b1a7f110aaa116046540b033767fb1d1f98063d151563c657f8"} Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.374295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" event={"ID":"94a076eb-c02b-467b-bdc2-7a33ba3ec8a1","Type":"ContainerStarted","Data":"19591c8308fe10c0f633413c16b6320aa5fa704cf3c076e239352638fc265d18"} Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.374838 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.401437 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.434429 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" podStartSLOduration=7.434415275 podStartE2EDuration="7.434415275s" podCreationTimestamp="2026-03-12 16:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:14:12.404754633 +0000 UTC m=+701.368716977" watchObservedRunningTime="2026-03-12 16:14:12.434415275 +0000 UTC m=+701.398377619" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.469441 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-z25z9"] Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.469600 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.470138 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.476085 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2"] Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.476228 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.476613 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.492649 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn"] Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.492800 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.493283 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.497142 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tbpsw"] Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.504711 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.505330 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.565674 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3efc1a9d9f0b721c4b69b25ab2d4f629d32baf5177600300520fb2641c40de13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.565734 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3efc1a9d9f0b721c4b69b25ab2d4f629d32baf5177600300520fb2641c40de13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.565757 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3efc1a9d9f0b721c4b69b25ab2d4f629d32baf5177600300520fb2641c40de13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.565799 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-z25z9_openshift-operators(98e16c84-ec9c-482a-8962-ce13556ffd74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-z25z9_openshift-operators(98e16c84-ec9c-482a-8962-ce13556ffd74)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-z25z9_openshift-operators_98e16c84-ec9c-482a-8962-ce13556ffd74_0(3efc1a9d9f0b721c4b69b25ab2d4f629d32baf5177600300520fb2641c40de13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" podUID="98e16c84-ec9c-482a-8962-ce13556ffd74" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.576725 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(58807ece9a12fbb2720c0d3f34110064bba10c54231388b6e0a9b4ab664c90e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.576783 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(58807ece9a12fbb2720c0d3f34110064bba10c54231388b6e0a9b4ab664c90e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.576818 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(58807ece9a12fbb2720c0d3f34110064bba10c54231388b6e0a9b4ab664c90e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.576866 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators(f97d4a45-34b0-4192-a7cb-05d23d1b614d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators(f97d4a45-34b0-4192-a7cb-05d23d1b614d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-sb8w2_openshift-operators_f97d4a45-34b0-4192-a7cb-05d23d1b614d_0(58807ece9a12fbb2720c0d3f34110064bba10c54231388b6e0a9b4ab664c90e8): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" podUID="f97d4a45-34b0-4192-a7cb-05d23d1b614d" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.605861 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(55034b3628f404a6c7343d038c64d9be387df370a16b1248bfb07f0e1a8144d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.605922 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(55034b3628f404a6c7343d038c64d9be387df370a16b1248bfb07f0e1a8144d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.605942 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(55034b3628f404a6c7343d038c64d9be387df370a16b1248bfb07f0e1a8144d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.605980 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-tbpsw_openshift-operators(afacf716-028a-4848-a495-83f7c01a47ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-tbpsw_openshift-operators(afacf716-028a-4848-a495-83f7c01a47ca)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tbpsw_openshift-operators_afacf716-028a-4848-a495-83f7c01a47ca_0(55034b3628f404a6c7343d038c64d9be387df370a16b1248bfb07f0e1a8144d1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.606513 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(46adfad3bc0441a7a5c001f7fb92d91702b077d0ba310288fcaf45bf1ead739f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.606596 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(46adfad3bc0441a7a5c001f7fb92d91702b077d0ba310288fcaf45bf1ead739f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.606621 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(46adfad3bc0441a7a5c001f7fb92d91702b077d0ba310288fcaf45bf1ead739f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.606687 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators(b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators(b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_openshift-operators_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e_0(46adfad3bc0441a7a5c001f7fb92d91702b077d0ba310288fcaf45bf1ead739f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" podUID="b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.610995 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7"] Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.611118 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:12 crc kubenswrapper[4687]: I0312 16:14:12.611546 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.634492 4687 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(7e1e26e7ddfb224991d6109e36573aedb64364a8e37be54249d78d3b75009c86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.634572 4687 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(7e1e26e7ddfb224991d6109e36573aedb64364a8e37be54249d78d3b75009c86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.634597 4687 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(7e1e26e7ddfb224991d6109e36573aedb64364a8e37be54249d78d3b75009c86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:12 crc kubenswrapper[4687]: E0312 16:14:12.634651 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators(9e487d62-c210-4aa0-b5a2-371bcf18cad5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators(9e487d62-c210-4aa0-b5a2-371bcf18cad5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_openshift-operators_9e487d62-c210-4aa0-b5a2-371bcf18cad5_0(7e1e26e7ddfb224991d6109e36573aedb64364a8e37be54249d78d3b75009c86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" podUID="9e487d62-c210-4aa0-b5a2-371bcf18cad5" Mar 12 16:14:13 crc kubenswrapper[4687]: I0312 16:14:13.380652 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:13 crc kubenswrapper[4687]: I0312 16:14:13.381014 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:13 crc kubenswrapper[4687]: I0312 16:14:13.453845 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:23 crc kubenswrapper[4687]: I0312 16:14:23.221235 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 16:14:23 crc kubenswrapper[4687]: I0312 16:14:23.732157 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:23 crc kubenswrapper[4687]: I0312 16:14:23.732498 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:23 crc kubenswrapper[4687]: I0312 16:14:23.732785 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" Mar 12 16:14:23 crc kubenswrapper[4687]: I0312 16:14:23.734490 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:24 crc kubenswrapper[4687]: I0312 16:14:24.250299 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-z25z9"] Mar 12 16:14:24 crc kubenswrapper[4687]: I0312 16:14:24.257561 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:14:24 crc kubenswrapper[4687]: I0312 16:14:24.304342 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7"] Mar 12 16:14:24 crc kubenswrapper[4687]: I0312 16:14:24.441509 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" event={"ID":"9e487d62-c210-4aa0-b5a2-371bcf18cad5","Type":"ContainerStarted","Data":"fc6fa9b9eba9d60c8368be67d2cdab2822e376dc5c6725d1f5c93f4cbc0c5880"} Mar 12 16:14:24 crc kubenswrapper[4687]: I0312 16:14:24.442453 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" event={"ID":"98e16c84-ec9c-482a-8962-ce13556ffd74","Type":"ContainerStarted","Data":"337fe71882127d3f1feef929d3ac552da3aee475cade65c886279bbfbefad75b"} Mar 12 16:14:25 crc kubenswrapper[4687]: I0312 16:14:25.733267 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:25 crc kubenswrapper[4687]: I0312 16:14:25.734054 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" Mar 12 16:14:26 crc kubenswrapper[4687]: I0312 16:14:26.213216 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2"] Mar 12 16:14:26 crc kubenswrapper[4687]: I0312 16:14:26.454912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" event={"ID":"f97d4a45-34b0-4192-a7cb-05d23d1b614d","Type":"ContainerStarted","Data":"e3787dd606e5cae880e435a2ea2cb7e1e72ec8b38e155e003585885f222274be"} Mar 12 16:14:27 crc kubenswrapper[4687]: I0312 16:14:27.733639 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:27 crc kubenswrapper[4687]: I0312 16:14:27.734272 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" Mar 12 16:14:27 crc kubenswrapper[4687]: I0312 16:14:27.734652 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:27 crc kubenswrapper[4687]: I0312 16:14:27.735159 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:30 crc kubenswrapper[4687]: I0312 16:14:30.166885 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tbpsw"] Mar 12 16:14:30 crc kubenswrapper[4687]: I0312 16:14:30.871979 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn"] Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.488687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" event={"ID":"afacf716-028a-4848-a495-83f7c01a47ca","Type":"ContainerStarted","Data":"abbe0784319d5753a7295c91ebba4197eccdbf4aa1f3866a8b933f2f7ee8b2e6"} Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.491886 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" event={"ID":"f97d4a45-34b0-4192-a7cb-05d23d1b614d","Type":"ContainerStarted","Data":"af6270131458ad6a8b08cf51441be2bfa15b725a776e0adca1e7684ccc0edc5c"} Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.493888 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" event={"ID":"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e","Type":"ContainerStarted","Data":"d91368fcda7aada1705087d9c8f093bd718cc4bb5693707dcfac11d3b215c27f"} Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.493932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" event={"ID":"b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e","Type":"ContainerStarted","Data":"1fc3447c40e23c088bf95551e73707f7a47c59a80a3263d9ec1111653c26227d"} Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.495962 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" event={"ID":"98e16c84-ec9c-482a-8962-ce13556ffd74","Type":"ContainerStarted","Data":"ed14fe308c37d1a5f6fc4cb9bbb20345a9b2406c4d76483586b49bf8d168fe0e"} Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.496574 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.499729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" event={"ID":"9e487d62-c210-4aa0-b5a2-371bcf18cad5","Type":"ContainerStarted","Data":"487c7a15fb883b3cac2fe79018abfb028c4cdf7d6eef216d5053c3799cf6410f"} Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.521853 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sb8w2" podStartSLOduration=21.233313628 podStartE2EDuration="25.521838118s" podCreationTimestamp="2026-03-12 16:14:06 +0000 UTC" firstStartedPulling="2026-03-12 16:14:26.239993347 +0000 UTC m=+715.203955691" lastFinishedPulling="2026-03-12 16:14:30.528517837 +0000 UTC m=+719.492480181" observedRunningTime="2026-03-12 16:14:31.52002668 +0000 UTC m=+720.483989064" watchObservedRunningTime="2026-03-12 16:14:31.521838118 +0000 UTC m=+720.485800462" Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.633625 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn" podStartSLOduration=25.633606341 podStartE2EDuration="25.633606341s" podCreationTimestamp="2026-03-12 16:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:14:31.549751193 +0000 UTC m=+720.513713537" watchObservedRunningTime="2026-03-12 16:14:31.633606341 +0000 UTC m=+720.597568675" Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.635798 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7" podStartSLOduration=19.430049023 podStartE2EDuration="25.635788639s" podCreationTimestamp="2026-03-12 16:14:06 +0000 UTC" firstStartedPulling="2026-03-12 16:14:24.313261454 +0000 UTC m=+713.277223788" lastFinishedPulling="2026-03-12 16:14:30.51900106 +0000 UTC m=+719.482963404" observedRunningTime="2026-03-12 16:14:31.633465417 +0000 UTC m=+720.597427811" watchObservedRunningTime="2026-03-12 16:14:31.635788639 +0000 UTC m=+720.599750973" Mar 12 16:14:31 crc kubenswrapper[4687]: I0312 16:14:31.663305 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" podStartSLOduration=18.401342966 podStartE2EDuration="24.663287103s" podCreationTimestamp="2026-03-12 16:14:07 +0000 UTC" firstStartedPulling="2026-03-12 16:14:24.256986091 +0000 UTC m=+713.220948435" lastFinishedPulling="2026-03-12 16:14:30.518930228 +0000 UTC m=+719.482892572" observedRunningTime="2026-03-12 16:14:31.658532165 +0000 UTC m=+720.622494499" watchObservedRunningTime="2026-03-12 16:14:31.663287103 +0000 UTC m=+720.627249447" Mar 12 16:14:35 crc kubenswrapper[4687]: I0312 16:14:35.571240 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" Mar 12 16:14:36 crc kubenswrapper[4687]: I0312 16:14:36.539628 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" event={"ID":"afacf716-028a-4848-a495-83f7c01a47ca","Type":"ContainerStarted","Data":"56ab79df6442e13a88cdbc149408e2a44139201ef4aa076b527d1b193d0b36f6"} Mar 12 16:14:36 crc kubenswrapper[4687]: I0312 16:14:36.540179 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:36 crc kubenswrapper[4687]: I0312 16:14:36.543224 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" start-of-body= Mar 12 16:14:36 crc kubenswrapper[4687]: I0312 16:14:36.543288 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" Mar 12 16:14:36 crc kubenswrapper[4687]: I0312 16:14:36.603164 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podStartSLOduration=23.894230687 podStartE2EDuration="29.603126787s" podCreationTimestamp="2026-03-12 16:14:07 +0000 UTC" 
firstStartedPulling="2026-03-12 16:14:30.513780208 +0000 UTC m=+719.477742562" lastFinishedPulling="2026-03-12 16:14:36.222676278 +0000 UTC m=+725.186638662" observedRunningTime="2026-03-12 16:14:36.595725517 +0000 UTC m=+725.559687891" watchObservedRunningTime="2026-03-12 16:14:36.603126787 +0000 UTC m=+725.567089151" Mar 12 16:14:37 crc kubenswrapper[4687]: I0312 16:14:37.561291 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 16:14:37 crc kubenswrapper[4687]: I0312 16:14:37.705495 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.692789 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zhmcz"] Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.694759 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.706621 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhmcz"] Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.768644 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2tf\" (UniqueName: \"kubernetes.io/projected/d33939dc-f684-44a8-a749-58790a1155d2-kube-api-access-ln2tf\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.768702 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-utilities\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.768804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-catalog-content\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.869879 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2tf\" (UniqueName: \"kubernetes.io/projected/d33939dc-f684-44a8-a749-58790a1155d2-kube-api-access-ln2tf\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.869923 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-utilities\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.869951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-catalog-content\") pod \"redhat-operators-zhmcz\" (UID: 
\"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.870414 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-catalog-content\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.870494 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-utilities\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:39 crc kubenswrapper[4687]: I0312 16:14:39.887604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2tf\" (UniqueName: \"kubernetes.io/projected/d33939dc-f684-44a8-a749-58790a1155d2-kube-api-access-ln2tf\") pod \"redhat-operators-zhmcz\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:40 crc kubenswrapper[4687]: I0312 16:14:40.020016 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:40 crc kubenswrapper[4687]: I0312 16:14:40.654494 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhmcz"] Mar 12 16:14:41 crc kubenswrapper[4687]: I0312 16:14:41.576226 4687 generic.go:334] "Generic (PLEG): container finished" podID="d33939dc-f684-44a8-a749-58790a1155d2" containerID="e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be" exitCode=0 Mar 12 16:14:41 crc kubenswrapper[4687]: I0312 16:14:41.576327 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhmcz" event={"ID":"d33939dc-f684-44a8-a749-58790a1155d2","Type":"ContainerDied","Data":"e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be"} Mar 12 16:14:41 crc kubenswrapper[4687]: I0312 16:14:41.577094 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhmcz" event={"ID":"d33939dc-f684-44a8-a749-58790a1155d2","Type":"ContainerStarted","Data":"390e8b3c37f44757039c3ac023b3580401fed3c6493af34752f9d30bed253fe8"} Mar 12 16:14:42 crc kubenswrapper[4687]: I0312 16:14:42.590954 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhmcz" event={"ID":"d33939dc-f684-44a8-a749-58790a1155d2","Type":"ContainerStarted","Data":"459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd"} Mar 12 16:14:43 crc kubenswrapper[4687]: I0312 16:14:43.598747 4687 generic.go:334] "Generic (PLEG): container finished" podID="d33939dc-f684-44a8-a749-58790a1155d2" containerID="459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd" exitCode=0 Mar 12 16:14:43 crc kubenswrapper[4687]: I0312 16:14:43.598823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhmcz" event={"ID":"d33939dc-f684-44a8-a749-58790a1155d2","Type":"ContainerDied","Data":"459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd"} Mar 12 16:14:44 crc kubenswrapper[4687]: I0312 16:14:44.607785 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zhmcz" event={"ID":"d33939dc-f684-44a8-a749-58790a1155d2","Type":"ContainerStarted","Data":"1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65"} Mar 12 16:14:44 crc kubenswrapper[4687]: I0312 16:14:44.627172 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zhmcz" podStartSLOduration=3.189775091 podStartE2EDuration="5.627152173s" podCreationTimestamp="2026-03-12 16:14:39 +0000 UTC" firstStartedPulling="2026-03-12 16:14:41.577560416 +0000 UTC m=+730.541522770" lastFinishedPulling="2026-03-12 16:14:44.014937508 +0000 UTC m=+732.978899852" observedRunningTime="2026-03-12 16:14:44.622625461 +0000 UTC m=+733.586587805" watchObservedRunningTime="2026-03-12 16:14:44.627152173 +0000 UTC m=+733.591114507" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.298620 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-km6xl"] Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.300160 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.302694 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.302941 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.303115 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4qsvp" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.313099 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-6qgpr"] Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.314008 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6qgpr" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.316611 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9slm2" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.336108 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4dcdp"] Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.337076 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.338681 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8tghq" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.346978 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4dcdp"] Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.354971 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hml5f\" (UniqueName: \"kubernetes.io/projected/13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7-kube-api-access-hml5f\") pod \"cert-manager-858654f9db-6qgpr\" (UID: \"13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7\") " pod="cert-manager/cert-manager-858654f9db-6qgpr" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.355011 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqmg5\" (UniqueName: \"kubernetes.io/projected/b16711e0-4d1c-4545-8399-acbb7e248fe8-kube-api-access-bqmg5\") pod \"cert-manager-cainjector-cf98fcc89-km6xl\" (UID: \"b16711e0-4d1c-4545-8399-acbb7e248fe8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.363410 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-km6xl"] Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.374609 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6qgpr"] Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.456789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hml5f\" (UniqueName: \"kubernetes.io/projected/13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7-kube-api-access-hml5f\") pod \"cert-manager-858654f9db-6qgpr\" (UID: \"13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7\") " pod="cert-manager/cert-manager-858654f9db-6qgpr" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.456835 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqmg5\" (UniqueName: \"kubernetes.io/projected/b16711e0-4d1c-4545-8399-acbb7e248fe8-kube-api-access-bqmg5\") pod \"cert-manager-cainjector-cf98fcc89-km6xl\" (UID: \"b16711e0-4d1c-4545-8399-acbb7e248fe8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.456870 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmxl\" (UniqueName: \"kubernetes.io/projected/f46caff8-15ce-49be-97d0-08e60d937972-kube-api-access-ghmxl\") pod \"cert-manager-webhook-687f57d79b-4dcdp\" (UID: \"f46caff8-15ce-49be-97d0-08e60d937972\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.473636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hml5f\" (UniqueName: \"kubernetes.io/projected/13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7-kube-api-access-hml5f\") pod \"cert-manager-858654f9db-6qgpr\" (UID: \"13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7\") " pod="cert-manager/cert-manager-858654f9db-6qgpr" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.476192 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqmg5\" (UniqueName: 
\"kubernetes.io/projected/b16711e0-4d1c-4545-8399-acbb7e248fe8-kube-api-access-bqmg5\") pod \"cert-manager-cainjector-cf98fcc89-km6xl\" (UID: \"b16711e0-4d1c-4545-8399-acbb7e248fe8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.558301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmxl\" (UniqueName: \"kubernetes.io/projected/f46caff8-15ce-49be-97d0-08e60d937972-kube-api-access-ghmxl\") pod \"cert-manager-webhook-687f57d79b-4dcdp\" (UID: \"f46caff8-15ce-49be-97d0-08e60d937972\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.579079 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghmxl\" (UniqueName: \"kubernetes.io/projected/f46caff8-15ce-49be-97d0-08e60d937972-kube-api-access-ghmxl\") pod \"cert-manager-webhook-687f57d79b-4dcdp\" (UID: \"f46caff8-15ce-49be-97d0-08e60d937972\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.621440 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.629826 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6qgpr" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.654093 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" Mar 12 16:14:46 crc kubenswrapper[4687]: I0312 16:14:46.950816 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4dcdp"] Mar 12 16:14:47 crc kubenswrapper[4687]: I0312 16:14:47.107613 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-km6xl"] Mar 12 16:14:47 crc kubenswrapper[4687]: W0312 16:14:47.110662 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb16711e0_4d1c_4545_8399_acbb7e248fe8.slice/crio-61d139d2da6edc97db940420e0032ed3515c9501b364fe9808a310e09dfff54c WatchSource:0}: Error finding container 61d139d2da6edc97db940420e0032ed3515c9501b364fe9808a310e09dfff54c: Status 404 returned error can't find the container with id 61d139d2da6edc97db940420e0032ed3515c9501b364fe9808a310e09dfff54c Mar 12 16:14:47 crc kubenswrapper[4687]: I0312 16:14:47.117298 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6qgpr"] Mar 12 16:14:47 crc kubenswrapper[4687]: W0312 16:14:47.124149 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13f9cddd_9bcc_4408_9cda_9dbb8f1f5cc7.slice/crio-ee274c5d06feac3d8d204d0639963ddffb5b252028ceae6f140d2bac2194ba1d WatchSource:0}: Error finding container ee274c5d06feac3d8d204d0639963ddffb5b252028ceae6f140d2bac2194ba1d: Status 404 returned error can't find the container with id ee274c5d06feac3d8d204d0639963ddffb5b252028ceae6f140d2bac2194ba1d Mar 12 16:14:47 crc kubenswrapper[4687]: I0312 16:14:47.635412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" 
event={"ID":"b16711e0-4d1c-4545-8399-acbb7e248fe8","Type":"ContainerStarted","Data":"61d139d2da6edc97db940420e0032ed3515c9501b364fe9808a310e09dfff54c"} Mar 12 16:14:47 crc kubenswrapper[4687]: I0312 16:14:47.636502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" event={"ID":"f46caff8-15ce-49be-97d0-08e60d937972","Type":"ContainerStarted","Data":"98ccb0c7b59482b4a5c9cddbb348732aed80af6af666b3968f49be1d8f2462c5"} Mar 12 16:14:47 crc kubenswrapper[4687]: I0312 16:14:47.637589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6qgpr" event={"ID":"13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7","Type":"ContainerStarted","Data":"ee274c5d06feac3d8d204d0639963ddffb5b252028ceae6f140d2bac2194ba1d"} Mar 12 16:14:50 crc kubenswrapper[4687]: I0312 16:14:50.021148 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:50 crc kubenswrapper[4687]: I0312 16:14:50.022320 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:14:51 crc kubenswrapper[4687]: I0312 16:14:51.068821 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zhmcz" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="registry-server" probeResult="failure" output=< Mar 12 16:14:51 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:14:51 crc kubenswrapper[4687]: > Mar 12 16:14:53 crc kubenswrapper[4687]: I0312 16:14:53.681749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6qgpr" event={"ID":"13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7","Type":"ContainerStarted","Data":"b4be2283d690815dfd2c89a24d078f07dab26c897307078033b3c2de06fa1eb0"} Mar 12 16:14:53 crc kubenswrapper[4687]: I0312 16:14:53.683116 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" event={"ID":"b16711e0-4d1c-4545-8399-acbb7e248fe8","Type":"ContainerStarted","Data":"68d8aa82551f73d7b30ab2c52b12ffc1fefb90f33315bd6e3dae688004ef234f"} Mar 12 16:14:53 crc kubenswrapper[4687]: I0312 16:14:53.684261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" event={"ID":"f46caff8-15ce-49be-97d0-08e60d937972","Type":"ContainerStarted","Data":"5a18ce2accd45a32bc896494e094b639c8ae276e92c26580ac1d1423f32280c5"} Mar 12 16:14:53 crc kubenswrapper[4687]: I0312 16:14:53.684451 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" Mar 12 16:14:53 crc kubenswrapper[4687]: I0312 16:14:53.698027 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-6qgpr" podStartSLOduration=1.882566835 podStartE2EDuration="7.697986627s" podCreationTimestamp="2026-03-12 16:14:46 +0000 UTC" firstStartedPulling="2026-03-12 16:14:47.135479973 +0000 UTC m=+736.099442327" lastFinishedPulling="2026-03-12 16:14:52.950899775 +0000 UTC m=+741.914862119" observedRunningTime="2026-03-12 16:14:53.694234836 +0000 UTC m=+742.658197180" watchObservedRunningTime="2026-03-12 16:14:53.697986627 +0000 UTC m=+742.661948961" Mar 12 16:14:53 crc kubenswrapper[4687]: I0312 16:14:53.711267 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" podStartSLOduration=1.7108988539999999 podStartE2EDuration="7.711252226s" podCreationTimestamp="2026-03-12 16:14:46 +0000 UTC" firstStartedPulling="2026-03-12 16:14:46.957806799 +0000 UTC m=+735.921769143" lastFinishedPulling="2026-03-12 16:14:52.958160161 +0000 UTC m=+741.922122515" observedRunningTime="2026-03-12 16:14:53.707402412 +0000 UTC m=+742.671364756" watchObservedRunningTime="2026-03-12 16:14:53.711252226 +0000 UTC m=+742.675214570" Mar 12 16:14:53 crc kubenswrapper[4687]: I0312 16:14:53.738094 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-km6xl" podStartSLOduration=1.833805457 podStartE2EDuration="7.738074111s" podCreationTimestamp="2026-03-12 16:14:46 +0000 UTC" firstStartedPulling="2026-03-12 16:14:47.112896152 +0000 UTC m=+736.076858516" lastFinishedPulling="2026-03-12 16:14:53.017164816 +0000 UTC m=+741.981127170" observedRunningTime="2026-03-12 16:14:53.734444883 +0000 UTC m=+742.698407227" watchObservedRunningTime="2026-03-12 16:14:53.738074111 +0000 UTC m=+742.702036465" Mar 12 16:14:57 crc kubenswrapper[4687]: I0312 16:14:57.131332 4687 scope.go:117] "RemoveContainer" containerID="e333aa997746b252319becf7b8923d4f519e3e34aa40e34198848218d38a9fd8" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.108638 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.148684 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j"] Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.150113 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.152916 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.153174 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.183574 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j"] Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.292228 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.293093 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-config-volume\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.293296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-secret-volume\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.293499 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sm9z\" (UniqueName: \"kubernetes.io/projected/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-kube-api-access-9sm9z\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.358539 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhmcz"] Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.394834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-config-volume\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.394871 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-secret-volume\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.394913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sm9z\" (UniqueName: \"kubernetes.io/projected/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-kube-api-access-9sm9z\") pod \"collect-profiles-29555535-5nw4j\" (UID: 
\"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.395684 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-config-volume\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.409502 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-secret-volume\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.412144 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sm9z\" (UniqueName: \"kubernetes.io/projected/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-kube-api-access-9sm9z\") pod \"collect-profiles-29555535-5nw4j\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.481251 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:00 crc kubenswrapper[4687]: I0312 16:15:00.887994 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j"] Mar 12 16:15:00 crc kubenswrapper[4687]: W0312 16:15:00.892054 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c9c53d5_0b9e_4f99_84d8_edfcea4675aa.slice/crio-a482894bd8b893a0a1bdbcfad5c212d3c218188386a65c34238f942f7fd980f0 WatchSource:0}: Error finding container a482894bd8b893a0a1bdbcfad5c212d3c218188386a65c34238f942f7fd980f0: Status 404 returned error can't find the container with id a482894bd8b893a0a1bdbcfad5c212d3c218188386a65c34238f942f7fd980f0 Mar 12 16:15:01 crc kubenswrapper[4687]: I0312 16:15:01.657624 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" Mar 12 16:15:01 crc kubenswrapper[4687]: I0312 16:15:01.760433 4687 generic.go:334] "Generic (PLEG): container finished" podID="6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" containerID="820440dbfa46ae90f64c8efbb408d225c5722bf3dfaf1a2e88180e1b34e6ea02" exitCode=0 Mar 12 16:15:01 crc kubenswrapper[4687]: I0312 16:15:01.760503 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" event={"ID":"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa","Type":"ContainerDied","Data":"820440dbfa46ae90f64c8efbb408d225c5722bf3dfaf1a2e88180e1b34e6ea02"} Mar 12 16:15:01 crc kubenswrapper[4687]: I0312 16:15:01.760542 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" event={"ID":"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa","Type":"ContainerStarted","Data":"a482894bd8b893a0a1bdbcfad5c212d3c218188386a65c34238f942f7fd980f0"} Mar 12 16:15:01 crc kubenswrapper[4687]: I0312 16:15:01.760646 4687 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zhmcz" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="registry-server" containerID="cri-o://1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65" gracePeriod=2 Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.128405 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.221767 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-utilities\") pod \"d33939dc-f684-44a8-a749-58790a1155d2\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.221816 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln2tf\" (UniqueName: \"kubernetes.io/projected/d33939dc-f684-44a8-a749-58790a1155d2-kube-api-access-ln2tf\") pod \"d33939dc-f684-44a8-a749-58790a1155d2\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.221874 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-catalog-content\") pod \"d33939dc-f684-44a8-a749-58790a1155d2\" (UID: \"d33939dc-f684-44a8-a749-58790a1155d2\") " Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.222533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-utilities" (OuterVolumeSpecName: "utilities") pod "d33939dc-f684-44a8-a749-58790a1155d2" (UID: "d33939dc-f684-44a8-a749-58790a1155d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.226614 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33939dc-f684-44a8-a749-58790a1155d2-kube-api-access-ln2tf" (OuterVolumeSpecName: "kube-api-access-ln2tf") pod "d33939dc-f684-44a8-a749-58790a1155d2" (UID: "d33939dc-f684-44a8-a749-58790a1155d2"). InnerVolumeSpecName "kube-api-access-ln2tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.323115 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln2tf\" (UniqueName: \"kubernetes.io/projected/d33939dc-f684-44a8-a749-58790a1155d2-kube-api-access-ln2tf\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.323165 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.338561 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d33939dc-f684-44a8-a749-58790a1155d2" (UID: "d33939dc-f684-44a8-a749-58790a1155d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.424758 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33939dc-f684-44a8-a749-58790a1155d2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.769508 4687 generic.go:334] "Generic (PLEG): container finished" podID="d33939dc-f684-44a8-a749-58790a1155d2" containerID="1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65" exitCode=0 Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.769544 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhmcz" event={"ID":"d33939dc-f684-44a8-a749-58790a1155d2","Type":"ContainerDied","Data":"1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65"} Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.769586 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhmcz" event={"ID":"d33939dc-f684-44a8-a749-58790a1155d2","Type":"ContainerDied","Data":"390e8b3c37f44757039c3ac023b3580401fed3c6493af34752f9d30bed253fe8"} Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.769606 4687 scope.go:117] "RemoveContainer" containerID="1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.770454 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhmcz" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.800347 4687 scope.go:117] "RemoveContainer" containerID="459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.802468 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhmcz"] Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.808278 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zhmcz"] Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.835664 4687 scope.go:117] "RemoveContainer" containerID="e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.857081 4687 scope.go:117] "RemoveContainer" containerID="1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65" Mar 12 16:15:02 crc kubenswrapper[4687]: E0312 16:15:02.857670 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65\": container with ID starting with 1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65 not found: ID does not exist" containerID="1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.857695 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65"} err="failed to get container status \"1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65\": rpc error: code = NotFound desc = could not find container \"1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65\": container with ID starting with 1e1d621f43b77e818d49caee21e4b697382f7ca8af964bcd8861a1d9cc0a1b65 not found: ID does not exist" Mar 12 16:15:02 crc 
kubenswrapper[4687]: I0312 16:15:02.857713 4687 scope.go:117] "RemoveContainer" containerID="459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd" Mar 12 16:15:02 crc kubenswrapper[4687]: E0312 16:15:02.857893 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd\": container with ID starting with 459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd not found: ID does not exist" containerID="459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.857906 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd"} err="failed to get container status \"459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd\": rpc error: code = NotFound desc = could not find container \"459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd\": container with ID starting with 459095b0de41a65475b8a1b7db70c28dd3d1d527bc52141f53087c145bc48dfd not found: ID does not exist" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.857917 4687 scope.go:117] "RemoveContainer" containerID="e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be" Mar 12 16:15:02 crc kubenswrapper[4687]: E0312 16:15:02.867755 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be\": container with ID starting with e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be not found: ID does not exist" containerID="e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.867779 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be"} err="failed to get container status \"e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be\": rpc error: code = NotFound desc = could not find container \"e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be\": container with ID starting with e03aaa33ef2252e26cab69abd1f037009b58c9f61b8da5d79a7e7ecc2233d6be not found: ID does not exist" Mar 12 16:15:02 crc kubenswrapper[4687]: I0312 16:15:02.997740 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.135687 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-secret-volume\") pod \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.135837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sm9z\" (UniqueName: \"kubernetes.io/projected/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-kube-api-access-9sm9z\") pod \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.135932 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-config-volume\") pod \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\" (UID: \"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa\") " Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.136760 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" (UID: "6c9c53d5-0b9e-4f99-84d8-edfcea4675aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.140079 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-kube-api-access-9sm9z" (OuterVolumeSpecName: "kube-api-access-9sm9z") pod "6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" (UID: "6c9c53d5-0b9e-4f99-84d8-edfcea4675aa"). InnerVolumeSpecName "kube-api-access-9sm9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.150598 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" (UID: "6c9c53d5-0b9e-4f99-84d8-edfcea4675aa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.237273 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.237313 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.237323 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sm9z\" (UniqueName: \"kubernetes.io/projected/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa-kube-api-access-9sm9z\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.741485 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33939dc-f684-44a8-a749-58790a1155d2" path="/var/lib/kubelet/pods/d33939dc-f684-44a8-a749-58790a1155d2/volumes" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.779417 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" event={"ID":"6c9c53d5-0b9e-4f99-84d8-edfcea4675aa","Type":"ContainerDied","Data":"a482894bd8b893a0a1bdbcfad5c212d3c218188386a65c34238f942f7fd980f0"} Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.779476 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a482894bd8b893a0a1bdbcfad5c212d3c218188386a65c34238f942f7fd980f0" Mar 12 16:15:03 crc kubenswrapper[4687]: I0312 16:15:03.779498 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j" Mar 12 16:15:14 crc kubenswrapper[4687]: I0312 16:15:14.121933 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:15:14 crc kubenswrapper[4687]: I0312 16:15:14.122481 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.712834 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg"] Mar 12 16:15:28 crc kubenswrapper[4687]: E0312 16:15:28.714633 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="extract-content" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.714731 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="extract-content" Mar 12 16:15:28 crc kubenswrapper[4687]: E0312 16:15:28.714795 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="extract-utilities" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.714850 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="extract-utilities" Mar 12 16:15:28 crc kubenswrapper[4687]: E0312 16:15:28.714906 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="registry-server" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.714964 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="registry-server" Mar 12 16:15:28 crc kubenswrapper[4687]: E0312 16:15:28.715040 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" containerName="collect-profiles" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.715095 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" containerName="collect-profiles" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.715259 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33939dc-f684-44a8-a749-58790a1155d2" containerName="registry-server" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.715328 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" containerName="collect-profiles" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.716250 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.720482 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.723879 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg"] Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.823684 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2t8q\" (UniqueName: \"kubernetes.io/projected/7193d683-0bda-494e-87bd-79a506c1ec30-kube-api-access-t2t8q\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.823797 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.823884 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.925645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.925950 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2t8q\" (UniqueName: \"kubernetes.io/projected/7193d683-0bda-494e-87bd-79a506c1ec30-kube-api-access-t2t8q\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.926079 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.926155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.926395 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.965945 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt"] Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.966199 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2t8q\" (UniqueName: \"kubernetes.io/projected/7193d683-0bda-494e-87bd-79a506c1ec30-kube-api-access-t2t8q\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.967718 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:28 crc kubenswrapper[4687]: I0312 16:15:28.979447 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt"] Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.031907 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.129712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.129757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.129829 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v2q6\" (UniqueName: \"kubernetes.io/projected/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-kube-api-access-2v2q6\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.231377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.231701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.231791 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v2q6\" (UniqueName: \"kubernetes.io/projected/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-kube-api-access-2v2q6\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.232799 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.233109 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.253340 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg"] Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.258338 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v2q6\" (UniqueName: \"kubernetes.io/projected/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-kube-api-access-2v2q6\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.296015 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.507232 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt"] Mar 12 16:15:29 crc kubenswrapper[4687]: W0312 16:15:29.511070 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ea334fc_44e4_4a2b_9470_e5fa2ae01911.slice/crio-0ac2c0e844f4ab4baa5931701d8e33b56cb5f3cd507953c67b649ff2a16c4689 WatchSource:0}: Error finding container 0ac2c0e844f4ab4baa5931701d8e33b56cb5f3cd507953c67b649ff2a16c4689: Status 404 returned error can't find the container with id 0ac2c0e844f4ab4baa5931701d8e33b56cb5f3cd507953c67b649ff2a16c4689 Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.969755 4687 generic.go:334] "Generic (PLEG): container finished" podID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerID="4b84451af92d67b0c619b028c7dad25fbaa87e5f1a7462e282614a04d3ce8e3c" exitCode=0 Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.969844 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" event={"ID":"5ea334fc-44e4-4a2b-9470-e5fa2ae01911","Type":"ContainerDied","Data":"4b84451af92d67b0c619b028c7dad25fbaa87e5f1a7462e282614a04d3ce8e3c"} Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.969875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" event={"ID":"5ea334fc-44e4-4a2b-9470-e5fa2ae01911","Type":"ContainerStarted","Data":"0ac2c0e844f4ab4baa5931701d8e33b56cb5f3cd507953c67b649ff2a16c4689"} Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.971883 4687 generic.go:334] "Generic (PLEG): container finished" podID="7193d683-0bda-494e-87bd-79a506c1ec30" containerID="113f32807f46607d45bb96e4cda67d2580c03994215a8509a863fb9acac16616" exitCode=0 Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.971913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" 
event={"ID":"7193d683-0bda-494e-87bd-79a506c1ec30","Type":"ContainerDied","Data":"113f32807f46607d45bb96e4cda67d2580c03994215a8509a863fb9acac16616"} Mar 12 16:15:29 crc kubenswrapper[4687]: I0312 16:15:29.971930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" event={"ID":"7193d683-0bda-494e-87bd-79a506c1ec30","Type":"ContainerStarted","Data":"0d200568adb772f87b3fdda952bd7ef335ebff6a279726604ee2de7632909792"} Mar 12 16:15:31 crc kubenswrapper[4687]: I0312 16:15:31.986432 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" event={"ID":"5ea334fc-44e4-4a2b-9470-e5fa2ae01911","Type":"ContainerStarted","Data":"4d7cffbf4b7564d6582e25abb733d340e735297d60acb166416fefc790f18d31"} Mar 12 16:15:31 crc kubenswrapper[4687]: I0312 16:15:31.988473 4687 generic.go:334] "Generic (PLEG): container finished" podID="7193d683-0bda-494e-87bd-79a506c1ec30" containerID="95c34cc045979552a0a44da7840a58d05a0426a79d83b9d70585552bab025ba4" exitCode=0 Mar 12 16:15:31 crc kubenswrapper[4687]: I0312 16:15:31.988500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" event={"ID":"7193d683-0bda-494e-87bd-79a506c1ec30","Type":"ContainerDied","Data":"95c34cc045979552a0a44da7840a58d05a0426a79d83b9d70585552bab025ba4"} Mar 12 16:15:32 crc kubenswrapper[4687]: I0312 16:15:32.995333 4687 generic.go:334] "Generic (PLEG): container finished" podID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerID="4d7cffbf4b7564d6582e25abb733d340e735297d60acb166416fefc790f18d31" exitCode=0 Mar 12 16:15:32 crc kubenswrapper[4687]: I0312 16:15:32.995401 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" event={"ID":"5ea334fc-44e4-4a2b-9470-e5fa2ae01911","Type":"ContainerDied","Data":"4d7cffbf4b7564d6582e25abb733d340e735297d60acb166416fefc790f18d31"} Mar 12 16:15:32 crc kubenswrapper[4687]: I0312 16:15:32.997375 4687 generic.go:334] "Generic (PLEG): container finished" podID="7193d683-0bda-494e-87bd-79a506c1ec30" containerID="90613f6149ab2d5f55dea767b94e1daf74771b89139fde9cb518a7d817d619eb" exitCode=0 Mar 12 16:15:32 crc kubenswrapper[4687]: I0312 16:15:32.997401 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" event={"ID":"7193d683-0bda-494e-87bd-79a506c1ec30","Type":"ContainerDied","Data":"90613f6149ab2d5f55dea767b94e1daf74771b89139fde9cb518a7d817d619eb"} Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.009771 4687 generic.go:334] "Generic (PLEG): container finished" podID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerID="a2e37d7f66908b8cd2c44e038bf8e8f652ce2785eeff4afdf26f10453eb3acaa" exitCode=0 Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.009860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" event={"ID":"5ea334fc-44e4-4a2b-9470-e5fa2ae01911","Type":"ContainerDied","Data":"a2e37d7f66908b8cd2c44e038bf8e8f652ce2785eeff4afdf26f10453eb3acaa"} Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.254876 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.315711 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-util\") pod \"7193d683-0bda-494e-87bd-79a506c1ec30\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.316039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2t8q\" (UniqueName: \"kubernetes.io/projected/7193d683-0bda-494e-87bd-79a506c1ec30-kube-api-access-t2t8q\") pod \"7193d683-0bda-494e-87bd-79a506c1ec30\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.316082 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-bundle\") pod \"7193d683-0bda-494e-87bd-79a506c1ec30\" (UID: \"7193d683-0bda-494e-87bd-79a506c1ec30\") " Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.317163 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-bundle" (OuterVolumeSpecName: "bundle") pod "7193d683-0bda-494e-87bd-79a506c1ec30" (UID: "7193d683-0bda-494e-87bd-79a506c1ec30"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.323947 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7193d683-0bda-494e-87bd-79a506c1ec30-kube-api-access-t2t8q" (OuterVolumeSpecName: "kube-api-access-t2t8q") pod "7193d683-0bda-494e-87bd-79a506c1ec30" (UID: "7193d683-0bda-494e-87bd-79a506c1ec30"). InnerVolumeSpecName "kube-api-access-t2t8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.343345 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-util" (OuterVolumeSpecName: "util") pod "7193d683-0bda-494e-87bd-79a506c1ec30" (UID: "7193d683-0bda-494e-87bd-79a506c1ec30"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.417947 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2t8q\" (UniqueName: \"kubernetes.io/projected/7193d683-0bda-494e-87bd-79a506c1ec30-kube-api-access-t2t8q\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.417984 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:34 crc kubenswrapper[4687]: I0312 16:15:34.417993 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7193d683-0bda-494e-87bd-79a506c1ec30-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.022330 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.023495 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg" event={"ID":"7193d683-0bda-494e-87bd-79a506c1ec30","Type":"ContainerDied","Data":"0d200568adb772f87b3fdda952bd7ef335ebff6a279726604ee2de7632909792"} Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.023848 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d200568adb772f87b3fdda952bd7ef335ebff6a279726604ee2de7632909792" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.267927 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.335974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-util\") pod \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.336135 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-bundle\") pod \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.336188 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v2q6\" (UniqueName: \"kubernetes.io/projected/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-kube-api-access-2v2q6\") pod \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\" (UID: \"5ea334fc-44e4-4a2b-9470-e5fa2ae01911\") " Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.336959 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-bundle" (OuterVolumeSpecName: "bundle") pod "5ea334fc-44e4-4a2b-9470-e5fa2ae01911" (UID: "5ea334fc-44e4-4a2b-9470-e5fa2ae01911"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.341964 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-kube-api-access-2v2q6" (OuterVolumeSpecName: "kube-api-access-2v2q6") pod "5ea334fc-44e4-4a2b-9470-e5fa2ae01911" (UID: "5ea334fc-44e4-4a2b-9470-e5fa2ae01911"). InnerVolumeSpecName "kube-api-access-2v2q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.346784 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-util" (OuterVolumeSpecName: "util") pod "5ea334fc-44e4-4a2b-9470-e5fa2ae01911" (UID: "5ea334fc-44e4-4a2b-9470-e5fa2ae01911"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.438250 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.438614 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v2q6\" (UniqueName: \"kubernetes.io/projected/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-kube-api-access-2v2q6\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:35 crc kubenswrapper[4687]: I0312 16:15:35.438632 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ea334fc-44e4-4a2b-9470-e5fa2ae01911-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:15:36 crc kubenswrapper[4687]: I0312 16:15:36.030766 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" event={"ID":"5ea334fc-44e4-4a2b-9470-e5fa2ae01911","Type":"ContainerDied","Data":"0ac2c0e844f4ab4baa5931701d8e33b56cb5f3cd507953c67b649ff2a16c4689"} Mar 12 16:15:36 crc kubenswrapper[4687]: I0312 16:15:36.030819 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac2c0e844f4ab4baa5931701d8e33b56cb5f3cd507953c67b649ff2a16c4689" Mar 12 16:15:36 crc kubenswrapper[4687]: I0312 16:15:36.030822 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752173 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-qpjwx"] Mar 12 16:15:38 crc kubenswrapper[4687]: E0312 16:15:38.752663 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerName="util" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752675 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerName="util" Mar 12 16:15:38 crc kubenswrapper[4687]: E0312 16:15:38.752693 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerName="pull" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752699 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerName="pull" Mar 12 16:15:38 crc kubenswrapper[4687]: E0312 16:15:38.752708 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7193d683-0bda-494e-87bd-79a506c1ec30" containerName="pull" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752713 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7193d683-0bda-494e-87bd-79a506c1ec30" containerName="pull" Mar 12 16:15:38 crc kubenswrapper[4687]: E0312 16:15:38.752723 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7193d683-0bda-494e-87bd-79a506c1ec30" containerName="extract" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752729 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7193d683-0bda-494e-87bd-79a506c1ec30" containerName="extract" Mar 12 16:15:38 crc kubenswrapper[4687]: E0312 16:15:38.752737 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerName="extract" Mar 12 16:15:38 
crc kubenswrapper[4687]: I0312 16:15:38.752742 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerName="extract" Mar 12 16:15:38 crc kubenswrapper[4687]: E0312 16:15:38.752755 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7193d683-0bda-494e-87bd-79a506c1ec30" containerName="util" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752760 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7193d683-0bda-494e-87bd-79a506c1ec30" containerName="util" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752873 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7193d683-0bda-494e-87bd-79a506c1ec30" containerName="extract" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.752888 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea334fc-44e4-4a2b-9470-e5fa2ae01911" containerName="extract" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.753387 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.755278 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.755402 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.759556 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-4kn5b" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.769778 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-qpjwx"] Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.882093 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqscw\" (UniqueName: \"kubernetes.io/projected/3cb4eb93-d3ba-469b-85fd-fe482c1b53ea-kube-api-access-bqscw\") pod \"cluster-logging-operator-c769fd969-qpjwx\" (UID: \"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea\") " pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.983002 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqscw\" (UniqueName: \"kubernetes.io/projected/3cb4eb93-d3ba-469b-85fd-fe482c1b53ea-kube-api-access-bqscw\") pod \"cluster-logging-operator-c769fd969-qpjwx\" (UID: \"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea\") " pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" Mar 12 16:15:38 crc kubenswrapper[4687]: I0312 16:15:38.999224 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqscw\" (UniqueName: \"kubernetes.io/projected/3cb4eb93-d3ba-469b-85fd-fe482c1b53ea-kube-api-access-bqscw\") pod \"cluster-logging-operator-c769fd969-qpjwx\" (UID: \"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea\") " pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" Mar 12 16:15:39 crc kubenswrapper[4687]: I0312 16:15:39.070453 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" Mar 12 16:15:39 crc kubenswrapper[4687]: I0312 16:15:39.457909 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-qpjwx"] Mar 12 16:15:39 crc kubenswrapper[4687]: W0312 16:15:39.465517 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cb4eb93_d3ba_469b_85fd_fe482c1b53ea.slice/crio-b01c87782964e1fd11d3a43ecdde967c10e17c114f92cb6348a476bac191c76f WatchSource:0}: Error finding container b01c87782964e1fd11d3a43ecdde967c10e17c114f92cb6348a476bac191c76f: Status 404 returned error can't find the container with id b01c87782964e1fd11d3a43ecdde967c10e17c114f92cb6348a476bac191c76f Mar 12 16:15:40 crc kubenswrapper[4687]: I0312 16:15:40.058087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" event={"ID":"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea","Type":"ContainerStarted","Data":"b01c87782964e1fd11d3a43ecdde967c10e17c114f92cb6348a476bac191c76f"} Mar 12 16:15:44 crc kubenswrapper[4687]: I0312 16:15:44.122050 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:15:44 crc kubenswrapper[4687]: I0312 16:15:44.122637 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:15:45 crc kubenswrapper[4687]: I0312 16:15:45.096287 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" event={"ID":"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea","Type":"ContainerStarted","Data":"79342d023bb622e2a292e72cdb115a099cd4d60a12d93d470ca86ca3efb86c2f"} Mar 12 16:15:45 crc kubenswrapper[4687]: I0312 16:15:45.114348 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" podStartSLOduration=1.779063194 podStartE2EDuration="7.114330198s" podCreationTimestamp="2026-03-12 16:15:38 +0000 UTC" firstStartedPulling="2026-03-12 16:15:39.46785985 +0000 UTC m=+788.431822194" lastFinishedPulling="2026-03-12 16:15:44.803126854 +0000 UTC m=+793.767089198" observedRunningTime="2026-03-12 16:15:45.110582174 +0000 UTC m=+794.074544528" watchObservedRunningTime="2026-03-12 16:15:45.114330198 +0000 UTC m=+794.078292542" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.863563 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f"] Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.865326 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.867300 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.867781 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.871712 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.871900 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.872297 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.872713 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-7tz2q" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.884043 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f"] Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.945616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-webhook-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.945898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdddq\" (UniqueName: \"kubernetes.io/projected/1206243a-8728-44ca-9858-672a27817783-kube-api-access-bdddq\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.945990 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.946103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1206243a-8728-44ca-9858-672a27817783-manager-config\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:49 crc kubenswrapper[4687]: I0312 16:15:49.946175 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-apiservice-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.047218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-webhook-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.047276 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdddq\" (UniqueName: \"kubernetes.io/projected/1206243a-8728-44ca-9858-672a27817783-kube-api-access-bdddq\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.047298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.047345 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1206243a-8728-44ca-9858-672a27817783-manager-config\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.047379 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-apiservice-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.048995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1206243a-8728-44ca-9858-672a27817783-manager-config\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.052940 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.064772 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-webhook-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.065166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-apiservice-cert\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.066005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdddq\" (UniqueName: \"kubernetes.io/projected/1206243a-8728-44ca-9858-672a27817783-kube-api-access-bdddq\") pod \"loki-operator-controller-manager-7f5dc69449-f5b8f\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.195096 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:15:50 crc kubenswrapper[4687]: I0312 16:15:50.661622 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f"] Mar 12 16:15:51 crc kubenswrapper[4687]: I0312 16:15:51.135205 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" event={"ID":"1206243a-8728-44ca-9858-672a27817783","Type":"ContainerStarted","Data":"7c8b8228c7a04c01cdf4016050131a8d663faed9255fa4b3ca8f3b80a4bd9388"} Mar 12 16:15:54 crc kubenswrapper[4687]: I0312 16:15:54.162434 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" event={"ID":"1206243a-8728-44ca-9858-672a27817783","Type":"ContainerStarted","Data":"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217"} Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.122893 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555536-zmn29"] Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.124350 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555536-zmn29" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.127485 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.127681 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.127845 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.130787 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555536-zmn29"] Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.202288 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" event={"ID":"1206243a-8728-44ca-9858-672a27817783","Type":"ContainerStarted","Data":"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6"} Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.202606 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.206927 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.229286 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" podStartSLOduration=2.114854326 podStartE2EDuration="11.229267984s" podCreationTimestamp="2026-03-12 16:15:49 +0000 UTC" firstStartedPulling="2026-03-12 16:15:50.677412806 +0000 UTC m=+799.641375150" lastFinishedPulling="2026-03-12 16:15:59.791826464 +0000 UTC m=+808.755788808" observedRunningTime="2026-03-12 16:16:00.223966758 +0000 UTC m=+809.187929102" watchObservedRunningTime="2026-03-12 16:16:00.229267984 +0000 UTC m=+809.193230328" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.309103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbjq\" (UniqueName: \"kubernetes.io/projected/bddd0dc3-9173-46ed-90ec-1e905ec6e821-kube-api-access-mjbjq\") pod \"auto-csr-approver-29555536-zmn29\" (UID: \"bddd0dc3-9173-46ed-90ec-1e905ec6e821\") " pod="openshift-infra/auto-csr-approver-29555536-zmn29" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.412160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbjq\" (UniqueName: \"kubernetes.io/projected/bddd0dc3-9173-46ed-90ec-1e905ec6e821-kube-api-access-mjbjq\") pod \"auto-csr-approver-29555536-zmn29\" (UID: \"bddd0dc3-9173-46ed-90ec-1e905ec6e821\") " pod="openshift-infra/auto-csr-approver-29555536-zmn29" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.429334 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbjq\" (UniqueName: \"kubernetes.io/projected/bddd0dc3-9173-46ed-90ec-1e905ec6e821-kube-api-access-mjbjq\") pod \"auto-csr-approver-29555536-zmn29\" (UID: \"bddd0dc3-9173-46ed-90ec-1e905ec6e821\") " pod="openshift-infra/auto-csr-approver-29555536-zmn29" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 
16:16:00.438860 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555536-zmn29" Mar 12 16:16:00 crc kubenswrapper[4687]: I0312 16:16:00.821254 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555536-zmn29"] Mar 12 16:16:00 crc kubenswrapper[4687]: W0312 16:16:00.824766 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbddd0dc3_9173_46ed_90ec_1e905ec6e821.slice/crio-d3ff6d6b587898ab757529e1018fb2e91feb28357a4352f196b8e6e395604d45 WatchSource:0}: Error finding container d3ff6d6b587898ab757529e1018fb2e91feb28357a4352f196b8e6e395604d45: Status 404 returned error can't find the container with id d3ff6d6b587898ab757529e1018fb2e91feb28357a4352f196b8e6e395604d45 Mar 12 16:16:01 crc kubenswrapper[4687]: I0312 16:16:01.208427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555536-zmn29" event={"ID":"bddd0dc3-9173-46ed-90ec-1e905ec6e821","Type":"ContainerStarted","Data":"d3ff6d6b587898ab757529e1018fb2e91feb28357a4352f196b8e6e395604d45"} Mar 12 16:16:02 crc kubenswrapper[4687]: I0312 16:16:02.215482 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555536-zmn29" event={"ID":"bddd0dc3-9173-46ed-90ec-1e905ec6e821","Type":"ContainerStarted","Data":"6e8c67aeb86ec6a857badf690a65f0ea723851d6c36d019af73f80bbfc16ee1e"} Mar 12 16:16:02 crc kubenswrapper[4687]: I0312 16:16:02.236957 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555536-zmn29" podStartSLOduration=1.330647317 podStartE2EDuration="2.236940698s" podCreationTimestamp="2026-03-12 16:16:00 +0000 UTC" firstStartedPulling="2026-03-12 16:16:00.826289272 +0000 UTC m=+809.790251616" lastFinishedPulling="2026-03-12 16:16:01.732582653 +0000 UTC m=+810.696544997" observedRunningTime="2026-03-12 16:16:02.232657021 +0000 UTC m=+811.196619365" watchObservedRunningTime="2026-03-12 16:16:02.236940698 +0000 UTC m=+811.200903042" Mar 12 16:16:03 crc kubenswrapper[4687]: I0312 16:16:03.223487 4687 generic.go:334] "Generic (PLEG): container finished" podID="bddd0dc3-9173-46ed-90ec-1e905ec6e821" containerID="6e8c67aeb86ec6a857badf690a65f0ea723851d6c36d019af73f80bbfc16ee1e" exitCode=0 Mar 12 16:16:03 crc kubenswrapper[4687]: I0312 16:16:03.223542 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555536-zmn29" event={"ID":"bddd0dc3-9173-46ed-90ec-1e905ec6e821","Type":"ContainerDied","Data":"6e8c67aeb86ec6a857badf690a65f0ea723851d6c36d019af73f80bbfc16ee1e"} Mar 12 16:16:04 crc kubenswrapper[4687]: I0312 16:16:04.574982 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555536-zmn29" Mar 12 16:16:04 crc kubenswrapper[4687]: I0312 16:16:04.683624 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjbjq\" (UniqueName: \"kubernetes.io/projected/bddd0dc3-9173-46ed-90ec-1e905ec6e821-kube-api-access-mjbjq\") pod \"bddd0dc3-9173-46ed-90ec-1e905ec6e821\" (UID: \"bddd0dc3-9173-46ed-90ec-1e905ec6e821\") " Mar 12 16:16:04 crc kubenswrapper[4687]: I0312 16:16:04.689671 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddd0dc3-9173-46ed-90ec-1e905ec6e821-kube-api-access-mjbjq" (OuterVolumeSpecName: "kube-api-access-mjbjq") pod "bddd0dc3-9173-46ed-90ec-1e905ec6e821" (UID: "bddd0dc3-9173-46ed-90ec-1e905ec6e821"). InnerVolumeSpecName "kube-api-access-mjbjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:16:04 crc kubenswrapper[4687]: I0312 16:16:04.785588 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjbjq\" (UniqueName: \"kubernetes.io/projected/bddd0dc3-9173-46ed-90ec-1e905ec6e821-kube-api-access-mjbjq\") on node \"crc\" DevicePath \"\"" Mar 12 16:16:04 crc kubenswrapper[4687]: I0312 16:16:04.808229 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555530-sp779"] Mar 12 16:16:04 crc kubenswrapper[4687]: I0312 16:16:04.816058 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555530-sp779"] Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.246215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555536-zmn29" event={"ID":"bddd0dc3-9173-46ed-90ec-1e905ec6e821","Type":"ContainerDied","Data":"d3ff6d6b587898ab757529e1018fb2e91feb28357a4352f196b8e6e395604d45"} Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.246254 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ff6d6b587898ab757529e1018fb2e91feb28357a4352f196b8e6e395604d45" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.246307 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555536-zmn29" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.629335 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Mar 12 16:16:05 crc kubenswrapper[4687]: E0312 16:16:05.629655 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddd0dc3-9173-46ed-90ec-1e905ec6e821" containerName="oc" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.629670 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddd0dc3-9173-46ed-90ec-1e905ec6e821" containerName="oc" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.629841 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddd0dc3-9173-46ed-90ec-1e905ec6e821" containerName="oc" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.630347 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.641557 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.641890 4687 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-lk2lz" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.642164 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.642493 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.742730 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f118be6d-858e-413e-a006-922ea753d28c" path="/var/lib/kubelet/pods/f118be6d-858e-413e-a006-922ea753d28c/volumes" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.799656 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957kp\" (UniqueName: \"kubernetes.io/projected/c07af8c1-6ecf-4380-9a9c-0a34dbc82e34-kube-api-access-957kp\") pod \"minio\" (UID: \"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34\") " pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.799709 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\") pod \"minio\" (UID: \"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34\") " pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.901508 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\") pod \"minio\" (UID: \"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34\") " pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.901701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-957kp\" (UniqueName: \"kubernetes.io/projected/c07af8c1-6ecf-4380-9a9c-0a34dbc82e34-kube-api-access-957kp\") pod \"minio\" (UID: \"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34\") " pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.904155 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.904193 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\") pod \"minio\" (UID: \"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d94f90cd8cd2f500dbef51cc7af9ae09a53ba4b1629cf26470bab72e6d706b5/globalmount\"" pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.921709 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-957kp\" (UniqueName: \"kubernetes.io/projected/c07af8c1-6ecf-4380-9a9c-0a34dbc82e34-kube-api-access-957kp\") pod \"minio\" (UID: \"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34\") " pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.933909 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a4c57b2d-8326-4a28-847f-3be73b2d4c8f\") pod \"minio\" (UID: \"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34\") " pod="minio-dev/minio" Mar 12 16:16:05 crc kubenswrapper[4687]: I0312 16:16:05.949711 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Mar 12 16:16:06 crc kubenswrapper[4687]: I0312 16:16:06.150024 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 12 16:16:06 crc kubenswrapper[4687]: I0312 16:16:06.253403 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34","Type":"ContainerStarted","Data":"6035ff397e2a6b3f9887caf7d6f2c34e1c0575cc70625f1ef3d84371fd940180"} Mar 12 16:16:10 crc kubenswrapper[4687]: I0312 16:16:10.285833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c07af8c1-6ecf-4380-9a9c-0a34dbc82e34","Type":"ContainerStarted","Data":"13d4d77fd5f84cb6e6c542defe983dda607fef59ac9a2b98afacf58ce001711a"} Mar 12 16:16:10 crc kubenswrapper[4687]: I0312 16:16:10.304160 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.048809213 podStartE2EDuration="7.304143299s" podCreationTimestamp="2026-03-12 16:16:03 +0000 UTC" firstStartedPulling="2026-03-12 16:16:06.158316307 +0000 UTC m=+815.122278661" lastFinishedPulling="2026-03-12 16:16:09.413650393 +0000 UTC m=+818.377612747" observedRunningTime="2026-03-12 16:16:10.300342675 +0000 UTC m=+819.264305029" watchObservedRunningTime="2026-03-12 16:16:10.304143299 +0000 UTC m=+819.268105643" Mar 12 16:16:14 crc kubenswrapper[4687]: I0312 16:16:14.121561 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:16:14 crc kubenswrapper[4687]: I0312 16:16:14.121902 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:16:14 crc 
kubenswrapper[4687]: I0312 16:16:14.121949 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:16:14 crc kubenswrapper[4687]: I0312 16:16:14.122448 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e641712db1a1962856f1bc8834ef5662c58059d18ba81c47f6d78c93a0e3f14"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:16:14 crc kubenswrapper[4687]: I0312 16:16:14.122509 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://8e641712db1a1962856f1bc8834ef5662c58059d18ba81c47f6d78c93a0e3f14" gracePeriod=600 Mar 12 16:16:14 crc kubenswrapper[4687]: I0312 16:16:14.311122 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="8e641712db1a1962856f1bc8834ef5662c58059d18ba81c47f6d78c93a0e3f14" exitCode=0 Mar 12 16:16:14 crc kubenswrapper[4687]: I0312 16:16:14.311261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"8e641712db1a1962856f1bc8834ef5662c58059d18ba81c47f6d78c93a0e3f14"} Mar 12 16:16:14 crc kubenswrapper[4687]: I0312 16:16:14.311413 4687 scope.go:117] "RemoveContainer" containerID="b9433415f064106c8d16d6be7a69dee0d049ec5465c0e65f1afbe31077aa75da" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.137661 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.138881 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.144580 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.144711 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-mfgl7" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.144718 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.144814 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.144716 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.153586 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.246076 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.246135 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnjz9\" (UniqueName: \"kubernetes.io/projected/d5585f46-790b-4a19-b276-9c185d49e5fb-kube-api-access-rnjz9\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.246208 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-config\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.246274 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.246311 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.327735 4687 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-blj52"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.329000 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.329769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"dd9807f39bd0fa78ee3cf947dc53d693272e6c942fe244d8ebc4e3c782d477d9"} Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.331173 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.331398 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.331662 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.345863 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-blj52"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.347884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.347952 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.348046 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.348099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnjz9\" (UniqueName: \"kubernetes.io/projected/d5585f46-790b-4a19-b276-9c185d49e5fb-kube-api-access-rnjz9\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.348150 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-config\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 
16:16:15.350140 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-config\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.352193 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.358426 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.372991 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.396750 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.398198 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.400576 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.400775 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.407245 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnjz9\" (UniqueName: \"kubernetes.io/projected/d5585f46-790b-4a19-b276-9c185d49e5fb-kube-api-access-rnjz9\") pod \"logging-loki-distributor-5d5548c9f5-mm5q2\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.421349 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.451127 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-config\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.451254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.451293 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.451350 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.451400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2s2\" (UniqueName: \"kubernetes.io/projected/acd169c8-4496-4d03-a06b-de5cd486e7ed-kube-api-access-8b2s2\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.451456 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.460547 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553246 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553450 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2s2\" (UniqueName: \"kubernetes.io/projected/acd169c8-4496-4d03-a06b-de5cd486e7ed-kube-api-access-8b2s2\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553499 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-config\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 
crc kubenswrapper[4687]: I0312 16:16:15.553605 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-config\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553636 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553662 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hg7\" (UniqueName: \"kubernetes.io/projected/9a12135c-5a4e-490c-9a7a-2e741084697d-kube-api-access-h8hg7\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.553711 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.555062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.557037 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-9cpdv"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.557524 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-config\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.557755 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.558261 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.559440 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.589162 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.589964 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.590057 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.589995 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.590320 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.598705 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.635610 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2s2\" (UniqueName: \"kubernetes.io/projected/acd169c8-4496-4d03-a06b-de5cd486e7ed-kube-api-access-8b2s2\") pod \"logging-loki-querier-76bf7b6d45-blj52\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.638650 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-9cpdv"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.644588 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-t74nx"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.645965 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.649459 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-74hr8" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.649631 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-t74nx"] Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.651055 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tenants\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654839 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-config\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654855 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-rbac\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654871 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654890 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654906 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654926 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " 
pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654972 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-lokistack-gateway\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.654991 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8g2c\" (UniqueName: \"kubernetes.io/projected/14c9be95-e47d-4358-9b75-ab8188aeff38-kube-api-access-x8g2c\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.655008 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.655035 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hg7\" (UniqueName: \"kubernetes.io/projected/9a12135c-5a4e-490c-9a7a-2e741084697d-kube-api-access-h8hg7\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.655749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.656088 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-config\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.661685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.671076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.674622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hg7\" (UniqueName: \"kubernetes.io/projected/9a12135c-5a4e-490c-9a7a-2e741084697d-kube-api-access-h8hg7\") pod \"logging-loki-query-frontend-6d6859c548-7lkk5\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.745667 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756091 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-rbac\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756173 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqmr\" (UniqueName: \"kubernetes.io/projected/0b4762e3-e056-423a-b130-487353e771ed-kube-api-access-vwqmr\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756401 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-rbac\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756465 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756488 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tenants\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756514 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-lokistack-gateway\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-lokistack-gateway\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756620 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8g2c\" (UniqueName: \"kubernetes.io/projected/14c9be95-e47d-4358-9b75-ab8188aeff38-kube-api-access-x8g2c\") pod \"logging-loki-gateway-7847845898-9cpdv\" 
(UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tenants\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.756700 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: E0312 16:16:15.757224 4687 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 12 16:16:15 crc kubenswrapper[4687]: E0312 16:16:15.757288 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret podName:14c9be95-e47d-4358-9b75-ab8188aeff38 nodeName:}" failed. No retries permitted until 2026-03-12 16:16:16.257267929 +0000 UTC m=+825.221230353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret") pod "logging-loki-gateway-7847845898-9cpdv" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38") : secret "logging-loki-gateway-http" not found Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.757411 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-rbac\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.757567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.757606 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.758138 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-lokistack-gateway\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.765064 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tenants\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.765349 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.785503 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8g2c\" (UniqueName: \"kubernetes.io/projected/14c9be95-e47d-4358-9b75-ab8188aeff38-kube-api-access-x8g2c\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.860623 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.860938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqmr\" (UniqueName: \"kubernetes.io/projected/0b4762e3-e056-423a-b130-487353e771ed-kube-api-access-vwqmr\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.860963 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.860984 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.861012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tenants\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.861062 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-lokistack-gateway\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.861100 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.861119 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-rbac\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: E0312 16:16:15.861547 4687 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 12 16:16:15 crc kubenswrapper[4687]: E0312 16:16:15.861611 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret podName:0b4762e3-e056-423a-b130-487353e771ed nodeName:}" failed. No retries permitted until 2026-03-12 16:16:16.361593393 +0000 UTC m=+825.325555737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret") pod "logging-loki-gateway-7847845898-t74nx" (UID: "0b4762e3-e056-423a-b130-487353e771ed") : secret "logging-loki-gateway-http" not found Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.861893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.862067 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-rbac\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.862405 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-lokistack-gateway\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.862501 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " 
pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.868492 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.869108 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tenants\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.889561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqmr\" (UniqueName: \"kubernetes.io/projected/0b4762e3-e056-423a-b130-487353e771ed-kube-api-access-vwqmr\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:15 crc kubenswrapper[4687]: W0312 16:16:15.935342 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacd169c8_4496_4d03_a06b_de5cd486e7ed.slice/crio-d401f37bd3b1a7a17070986d04149ca37487609744328869732032411c1fdf8f WatchSource:0}: Error finding container d401f37bd3b1a7a17070986d04149ca37487609744328869732032411c1fdf8f: Status 404 returned error can't find the container with id d401f37bd3b1a7a17070986d04149ca37487609744328869732032411c1fdf8f Mar 12 16:16:15 crc kubenswrapper[4687]: I0312 16:16:15.952810 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-blj52"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.003510 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.014712 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.267968 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.270944 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret\") pod \"logging-loki-gateway-7847845898-9cpdv\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.319722 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.320651 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.322498 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.323007 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.330469 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.342528 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" event={"ID":"9a12135c-5a4e-490c-9a7a-2e741084697d","Type":"ContainerStarted","Data":"eae06805fc709d7593dd87080906b60715c5ebeb5f1c42f5ee1b4ee57d24527a"} Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.343650 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" event={"ID":"d5585f46-790b-4a19-b276-9c185d49e5fb","Type":"ContainerStarted","Data":"56dfa5b6267df4c585f8096b7773abd7afecafc226b76e822be672b486103263"} Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.345463 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" event={"ID":"acd169c8-4496-4d03-a06b-de5cd486e7ed","Type":"ContainerStarted","Data":"d401f37bd3b1a7a17070986d04149ca37487609744328869732032411c1fdf8f"} Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.363118 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.364621 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.367675 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.367700 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.369042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.374104 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret\") pod \"logging-loki-gateway-7847845898-t74nx\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.376951 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.471207 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.471371 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.471912 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.471960 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.471991 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 
16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472309 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qlp2\" (UniqueName: \"kubernetes.io/projected/c064f367-224d-4c76-b932-4dcd72d61b85-kube-api-access-6qlp2\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472334 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472419 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472587 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-config\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-config\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.472792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.473114 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.473294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f544\" (UniqueName: \"kubernetes.io/projected/903e7127-e866-4a6f-b55c-d5efa486ed88-kube-api-access-4f544\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.484239 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.485334 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.489483 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.489551 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.499782 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.521921 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575044 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575113 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575162 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575215 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575235 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7kk\" (UniqueName: \"kubernetes.io/projected/5988e7b5-79d2-42db-aa38-0ba4224cac87-kube-api-access-vl7kk\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f544\" (UniqueName: \"kubernetes.io/projected/903e7127-e866-4a6f-b55c-d5efa486ed88-kube-api-access-4f544\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575442 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.575692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576122 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576456 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qlp2\" (UniqueName: \"kubernetes.io/projected/c064f367-224d-4c76-b932-4dcd72d61b85-kube-api-access-6qlp2\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576478 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-config\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-config\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576564 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576637 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576662 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576714 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576739 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 
16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576801 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.576823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.578328 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.579370 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.579445 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e112b0e29892c9082e00bb3a09468a76dfb49ba96d4032021a89591e57977612/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.579696 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.580183 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.580213 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/10e4a4756392c350b6812219504826b8241bb985c5813d60bce0f155df6e09ec/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.580561 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
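The repeated csi_attacher.go:380 messages above come from the kubelet's CSI mount path: before mounting a PVC it queries the node plugin for the RPCs it supports, and because the kubevirt.io.hostpath-provisioner driver does not advertise STAGE_UNSTAGE_VOLUME, the MountDevice (NodeStageVolume) phase is skipped and immediately reported as "MountVolume.MountDevice succeeded" with the per-volume globalmount path, after which the volume moves straight on to the MountVolume.SetUp (NodePublishVolume) phase. The Go sketch below is not kubelet source; it only illustrates the NodeGetCapabilities check behind these log lines, and the unix socket path in it is a hypothetical placeholder rather than anything taken from this node.

```go
// Minimal sketch (assumed setup, not kubelet code): query a CSI node plugin for
// STAGE_UNSTAGE_VOLUME and decide whether a staging (MountDevice) step is needed.
package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical node-plugin socket path (placeholder for illustration only).
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/example-csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CSI node plugin: %v", err)
	}
	defer conn.Close()

	node := csi.NewNodeClient(conn)

	// Ask the driver which node RPCs it implements.
	resp, err := node.NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		log.Fatalf("NodeGetCapabilities: %v", err)
	}

	hasStageUnstage := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			hasStageUnstage = true
			break
		}
	}

	if !hasStageUnstage {
		// The case seen in the log: no NodeStageVolume call is made, the
		// MountDevice phase is treated as an immediate success, and the volume
		// goes straight to NodePublishVolume (the "MountVolume.SetUp" phase).
		fmt.Println("STAGE_UNSTAGE_VOLUME not advertised: skipping NodeStageVolume / MountDevice")
		return
	}
	fmt.Println("driver supports staging: NodeStageVolume would run against the globalmount path first")
}
```

For drivers that do advertise the capability, the kubelet stages the device once into the per-volume globalmount directory and only then publishes it into each consuming pod; with this hostpath provisioner the staging step is a no-op, which is why every PVC here logs the skip message followed directly by a successful SetUp.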
Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.580584 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9bffae0546d96168a8a4c72635781c695e8137ed3a747b95302275473e78a4e9/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.580878 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-config\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.582075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.583064 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.585861 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.588594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-config\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.592681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.595716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f544\" (UniqueName: \"kubernetes.io/projected/903e7127-e866-4a6f-b55c-d5efa486ed88-kube-api-access-4f544\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.595899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.598822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qlp2\" (UniqueName: \"kubernetes.io/projected/c064f367-224d-4c76-b932-4dcd72d61b85-kube-api-access-6qlp2\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.617329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.629840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.640166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.642650 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.683826 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.683884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.683915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.683950 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.683983 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.684023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7kk\" (UniqueName: \"kubernetes.io/projected/5988e7b5-79d2-42db-aa38-0ba4224cac87-kube-api-access-vl7kk\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.684063 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.687675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.688105 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.691587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.692303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.692781 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.700008 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.700043 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94a21af1b3ca60f5b53cf1801d45610b9f1530d7b95a3e9060844383772a149d/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.719143 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7kk\" (UniqueName: \"kubernetes.io/projected/5988e7b5-79d2-42db-aa38-0ba4224cac87-kube-api-access-vl7kk\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.749489 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.784551 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.807937 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:16 crc kubenswrapper[4687]: I0312 16:16:16.882684 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-9cpdv"] Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.195134 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-t74nx"] Mar 12 16:16:17 crc kubenswrapper[4687]: W0312 16:16:17.205797 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b4762e3_e056_423a_b130_487353e771ed.slice/crio-10a4688857d37f962fd538f1657aa44cd5faef09cd452093c1f0dd6e25ae5811 WatchSource:0}: Error finding container 10a4688857d37f962fd538f1657aa44cd5faef09cd452093c1f0dd6e25ae5811: Status 404 returned error can't find the container with id 10a4688857d37f962fd538f1657aa44cd5faef09cd452093c1f0dd6e25ae5811 Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.252516 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.299822 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:16:17 crc kubenswrapper[4687]: W0312 16:16:17.311225 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903e7127_e866_4a6f_b55c_d5efa486ed88.slice/crio-a94c1f05f9c4c6897f7504c3168da5e9de5663a3caa06ee5bec79cf9f145f171 WatchSource:0}: Error finding container a94c1f05f9c4c6897f7504c3168da5e9de5663a3caa06ee5bec79cf9f145f171: Status 404 returned error can't find the container with id a94c1f05f9c4c6897f7504c3168da5e9de5663a3caa06ee5bec79cf9f145f171 Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.356500 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"c064f367-224d-4c76-b932-4dcd72d61b85","Type":"ContainerStarted","Data":"e1b69bc34fcef8fd8edebc64e1e142640c26c4477ff7ef39b8c20e5ab3f40619"} Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.361933 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" event={"ID":"0b4762e3-e056-423a-b130-487353e771ed","Type":"ContainerStarted","Data":"10a4688857d37f962fd538f1657aa44cd5faef09cd452093c1f0dd6e25ae5811"} Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.363581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" event={"ID":"14c9be95-e47d-4358-9b75-ab8188aeff38","Type":"ContainerStarted","Data":"575e62c7a4e167f7bc03d235bafb9c928c5a8a3bfcf462f4fbab85f64fa798a1"} Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.375389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"903e7127-e866-4a6f-b55c-d5efa486ed88","Type":"ContainerStarted","Data":"a94c1f05f9c4c6897f7504c3168da5e9de5663a3caa06ee5bec79cf9f145f171"} Mar 12 16:16:17 crc kubenswrapper[4687]: I0312 16:16:17.393862 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:16:17 crc kubenswrapper[4687]: W0312 16:16:17.410412 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5988e7b5_79d2_42db_aa38_0ba4224cac87.slice/crio-2d064957803bea20a7b10d6c8d8072e850ae65f348b8d5d91d5583df8b60a2e2 WatchSource:0}: Error finding container 2d064957803bea20a7b10d6c8d8072e850ae65f348b8d5d91d5583df8b60a2e2: Status 404 returned error can't find the container with id 2d064957803bea20a7b10d6c8d8072e850ae65f348b8d5d91d5583df8b60a2e2 Mar 12 16:16:18 crc kubenswrapper[4687]: I0312 16:16:18.385064 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"5988e7b5-79d2-42db-aa38-0ba4224cac87","Type":"ContainerStarted","Data":"2d064957803bea20a7b10d6c8d8072e850ae65f348b8d5d91d5583df8b60a2e2"} Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.413303 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" event={"ID":"9a12135c-5a4e-490c-9a7a-2e741084697d","Type":"ContainerStarted","Data":"7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614"} Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.413889 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.419535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"903e7127-e866-4a6f-b55c-d5efa486ed88","Type":"ContainerStarted","Data":"21c54ee8fa5a1723869ed6bd1240c2446db2a05934d7e0f6a51b4fb501e35f88"} Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.419687 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.421979 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"c064f367-224d-4c76-b932-4dcd72d61b85","Type":"ContainerStarted","Data":"b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1"} Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.422084 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.423856 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" event={"ID":"d5585f46-790b-4a19-b276-9c185d49e5fb","Type":"ContainerStarted","Data":"541031a8847e15f19cb40a3dafd43ee51a05693a4687976728feaabd4b0eabd1"} Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.423989 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.425840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" event={"ID":"acd169c8-4496-4d03-a06b-de5cd486e7ed","Type":"ContainerStarted","Data":"5b0090e754e862e3c3a96c9e0911ab964477142974a0441f6626a34602552301"} Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.426546 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.432351 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" 
event={"ID":"5988e7b5-79d2-42db-aa38-0ba4224cac87","Type":"ContainerStarted","Data":"31a92a22b1d3049f36d2932f00ae91d186801ebf59c384e2d4af16f88ec29366"} Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.432501 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.454232 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" podStartSLOduration=1.662377006 podStartE2EDuration="5.45420788s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:16.023188849 +0000 UTC m=+824.987151193" lastFinishedPulling="2026-03-12 16:16:19.815019723 +0000 UTC m=+828.778982067" observedRunningTime="2026-03-12 16:16:20.437477871 +0000 UTC m=+829.401440235" watchObservedRunningTime="2026-03-12 16:16:20.45420788 +0000 UTC m=+829.418170224" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.462515 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.95857727 podStartE2EDuration="5.462491998s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:17.412762186 +0000 UTC m=+826.376724530" lastFinishedPulling="2026-03-12 16:16:19.916676914 +0000 UTC m=+828.880639258" observedRunningTime="2026-03-12 16:16:20.455994799 +0000 UTC m=+829.419957143" watchObservedRunningTime="2026-03-12 16:16:20.462491998 +0000 UTC m=+829.426454342" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.487036 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.8269434159999998 podStartE2EDuration="5.48701647s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:17.256959769 +0000 UTC m=+826.220922113" lastFinishedPulling="2026-03-12 16:16:19.917032823 +0000 UTC m=+828.880995167" observedRunningTime="2026-03-12 16:16:20.477826479 +0000 UTC m=+829.441788823" watchObservedRunningTime="2026-03-12 16:16:20.48701647 +0000 UTC m=+829.450978814" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.505746 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.899763766 podStartE2EDuration="5.505727705s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:17.314751426 +0000 UTC m=+826.278713780" lastFinishedPulling="2026-03-12 16:16:19.920715375 +0000 UTC m=+828.884677719" observedRunningTime="2026-03-12 16:16:20.5041164 +0000 UTC m=+829.468078764" watchObservedRunningTime="2026-03-12 16:16:20.505727705 +0000 UTC m=+829.469690059" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 16:16:20.531707 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" podStartSLOduration=1.694522408 podStartE2EDuration="5.531685787s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:16.019528008 +0000 UTC m=+824.983490352" lastFinishedPulling="2026-03-12 16:16:19.856691387 +0000 UTC m=+828.820653731" observedRunningTime="2026-03-12 16:16:20.527532383 +0000 UTC m=+829.491494727" watchObservedRunningTime="2026-03-12 16:16:20.531685787 +0000 UTC m=+829.495648131" Mar 12 16:16:20 crc kubenswrapper[4687]: I0312 
16:16:20.587513 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" podStartSLOduration=1.703460324 podStartE2EDuration="5.587490929s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:15.946845084 +0000 UTC m=+824.910807428" lastFinishedPulling="2026-03-12 16:16:19.830875689 +0000 UTC m=+828.794838033" observedRunningTime="2026-03-12 16:16:20.575260963 +0000 UTC m=+829.539223307" watchObservedRunningTime="2026-03-12 16:16:20.587490929 +0000 UTC m=+829.551453273" Mar 12 16:16:22 crc kubenswrapper[4687]: I0312 16:16:22.456704 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" event={"ID":"0b4762e3-e056-423a-b130-487353e771ed","Type":"ContainerStarted","Data":"46922afdc067616b14f9e4d1a062ba92d26923e76bf3fb7dc392a5e3f33e8491"} Mar 12 16:16:22 crc kubenswrapper[4687]: I0312 16:16:22.458587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" event={"ID":"14c9be95-e47d-4358-9b75-ab8188aeff38","Type":"ContainerStarted","Data":"e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce"} Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.472393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" event={"ID":"0b4762e3-e056-423a-b130-487353e771ed","Type":"ContainerStarted","Data":"ecb4d9ff91e7fc2a4750decb4519b998ca16fc9f1e60617799f11234a403ccf5"} Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.473880 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.473906 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.475646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" event={"ID":"14c9be95-e47d-4358-9b75-ab8188aeff38","Type":"ContainerStarted","Data":"d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836"} Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.475912 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.485101 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.485278 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.490427 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:16:24 crc kubenswrapper[4687]: I0312 16:16:24.501239 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" podStartSLOduration=3.305572795 podStartE2EDuration="9.501218508s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:17.208394956 +0000 UTC m=+826.172357300" lastFinishedPulling="2026-03-12 16:16:23.404040669 +0000 
UTC m=+832.368003013" observedRunningTime="2026-03-12 16:16:24.495101121 +0000 UTC m=+833.459063465" watchObservedRunningTime="2026-03-12 16:16:24.501218508 +0000 UTC m=+833.465180842" Mar 12 16:16:25 crc kubenswrapper[4687]: I0312 16:16:25.483066 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:25 crc kubenswrapper[4687]: I0312 16:16:25.491037 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:16:25 crc kubenswrapper[4687]: I0312 16:16:25.511950 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" podStartSLOduration=4.009928231 podStartE2EDuration="10.511928235s" podCreationTimestamp="2026-03-12 16:16:15 +0000 UTC" firstStartedPulling="2026-03-12 16:16:16.909722126 +0000 UTC m=+825.873684470" lastFinishedPulling="2026-03-12 16:16:23.41172213 +0000 UTC m=+832.375684474" observedRunningTime="2026-03-12 16:16:24.550721218 +0000 UTC m=+833.514683562" watchObservedRunningTime="2026-03-12 16:16:25.511928235 +0000 UTC m=+834.475890579" Mar 12 16:16:35 crc kubenswrapper[4687]: I0312 16:16:35.466631 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:16:35 crc kubenswrapper[4687]: I0312 16:16:35.660023 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:16:35 crc kubenswrapper[4687]: I0312 16:16:35.762073 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.478720 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x2c58"] Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.481407 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.492546 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2c58"] Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.649618 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.649671 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.661973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-catalog-content\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.662024 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlr6\" (UniqueName: \"kubernetes.io/projected/c3fad218-a4aa-4977-94cc-7e643073165b-kube-api-access-5jlr6\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.662094 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-utilities\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.757057 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.762904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-catalog-content\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.762941 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jlr6\" (UniqueName: \"kubernetes.io/projected/c3fad218-a4aa-4977-94cc-7e643073165b-kube-api-access-5jlr6\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.763005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-utilities\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: 
I0312 16:16:36.763502 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-catalog-content\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.763527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-utilities\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.783778 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jlr6\" (UniqueName: \"kubernetes.io/projected/c3fad218-a4aa-4977-94cc-7e643073165b-kube-api-access-5jlr6\") pod \"certified-operators-x2c58\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.844350 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:36 crc kubenswrapper[4687]: I0312 16:16:36.854244 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:16:37 crc kubenswrapper[4687]: I0312 16:16:37.350987 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2c58"] Mar 12 16:16:37 crc kubenswrapper[4687]: W0312 16:16:37.355448 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fad218_a4aa_4977_94cc_7e643073165b.slice/crio-d35576f66b61370c11295aa56cc1cf33e2ee515d61c20d2b10fc2927c52ba6fe WatchSource:0}: Error finding container d35576f66b61370c11295aa56cc1cf33e2ee515d61c20d2b10fc2927c52ba6fe: Status 404 returned error can't find the container with id d35576f66b61370c11295aa56cc1cf33e2ee515d61c20d2b10fc2927c52ba6fe Mar 12 16:16:37 crc kubenswrapper[4687]: I0312 16:16:37.558165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2c58" event={"ID":"c3fad218-a4aa-4977-94cc-7e643073165b","Type":"ContainerStarted","Data":"d35576f66b61370c11295aa56cc1cf33e2ee515d61c20d2b10fc2927c52ba6fe"} Mar 12 16:16:38 crc kubenswrapper[4687]: I0312 16:16:38.568986 4687 generic.go:334] "Generic (PLEG): container finished" podID="c3fad218-a4aa-4977-94cc-7e643073165b" containerID="889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3" exitCode=0 Mar 12 16:16:38 crc kubenswrapper[4687]: I0312 16:16:38.569061 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2c58" event={"ID":"c3fad218-a4aa-4977-94cc-7e643073165b","Type":"ContainerDied","Data":"889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3"} Mar 12 16:16:41 crc kubenswrapper[4687]: I0312 16:16:41.587896 4687 generic.go:334] "Generic (PLEG): container finished" podID="c3fad218-a4aa-4977-94cc-7e643073165b" containerID="270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613" exitCode=0 Mar 12 16:16:41 crc kubenswrapper[4687]: I0312 16:16:41.587945 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2c58" 
event={"ID":"c3fad218-a4aa-4977-94cc-7e643073165b","Type":"ContainerDied","Data":"270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613"} Mar 12 16:16:42 crc kubenswrapper[4687]: I0312 16:16:42.597984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2c58" event={"ID":"c3fad218-a4aa-4977-94cc-7e643073165b","Type":"ContainerStarted","Data":"09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19"} Mar 12 16:16:42 crc kubenswrapper[4687]: I0312 16:16:42.618059 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x2c58" podStartSLOduration=3.102121273 podStartE2EDuration="6.618041466s" podCreationTimestamp="2026-03-12 16:16:36 +0000 UTC" firstStartedPulling="2026-03-12 16:16:38.571008224 +0000 UTC m=+847.534970588" lastFinishedPulling="2026-03-12 16:16:42.086928437 +0000 UTC m=+851.050890781" observedRunningTime="2026-03-12 16:16:42.617749329 +0000 UTC m=+851.581711673" watchObservedRunningTime="2026-03-12 16:16:42.618041466 +0000 UTC m=+851.582003830" Mar 12 16:16:46 crc kubenswrapper[4687]: I0312 16:16:46.649825 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 12 16:16:46 crc kubenswrapper[4687]: I0312 16:16:46.650490 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:16:46 crc kubenswrapper[4687]: I0312 16:16:46.844756 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:46 crc kubenswrapper[4687]: I0312 16:16:46.844996 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:46 crc kubenswrapper[4687]: I0312 16:16:46.883830 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:47 crc kubenswrapper[4687]: I0312 16:16:47.676141 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:47 crc kubenswrapper[4687]: I0312 16:16:47.720073 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2c58"] Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.524820 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g6dfp"] Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.526552 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.546162 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6dfp"] Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.648501 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x2c58" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="registry-server" containerID="cri-o://09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19" gracePeriod=2 Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.674543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-catalog-content\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.674600 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-utilities\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.674766 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6vmf\" (UniqueName: \"kubernetes.io/projected/a5de4888-1093-45b1-9745-97df7aef8cd7-kube-api-access-x6vmf\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.776306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-utilities\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.776678 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6vmf\" (UniqueName: \"kubernetes.io/projected/a5de4888-1093-45b1-9745-97df7aef8cd7-kube-api-access-x6vmf\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.776781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-catalog-content\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.777861 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-catalog-content\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.778120 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-utilities\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.800481 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6vmf\" (UniqueName: \"kubernetes.io/projected/a5de4888-1093-45b1-9745-97df7aef8cd7-kube-api-access-x6vmf\") pod \"community-operators-g6dfp\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:49 crc kubenswrapper[4687]: I0312 16:16:49.847118 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.127926 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.202668 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6dfp"] Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.285284 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-utilities\") pod \"c3fad218-a4aa-4977-94cc-7e643073165b\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.285403 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-catalog-content\") pod \"c3fad218-a4aa-4977-94cc-7e643073165b\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.285511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jlr6\" (UniqueName: \"kubernetes.io/projected/c3fad218-a4aa-4977-94cc-7e643073165b-kube-api-access-5jlr6\") pod \"c3fad218-a4aa-4977-94cc-7e643073165b\" (UID: \"c3fad218-a4aa-4977-94cc-7e643073165b\") " Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.287661 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-utilities" (OuterVolumeSpecName: "utilities") pod "c3fad218-a4aa-4977-94cc-7e643073165b" (UID: "c3fad218-a4aa-4977-94cc-7e643073165b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.291736 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3fad218-a4aa-4977-94cc-7e643073165b-kube-api-access-5jlr6" (OuterVolumeSpecName: "kube-api-access-5jlr6") pod "c3fad218-a4aa-4977-94cc-7e643073165b" (UID: "c3fad218-a4aa-4977-94cc-7e643073165b"). InnerVolumeSpecName "kube-api-access-5jlr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.350037 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3fad218-a4aa-4977-94cc-7e643073165b" (UID: "c3fad218-a4aa-4977-94cc-7e643073165b"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.387175 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.387220 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3fad218-a4aa-4977-94cc-7e643073165b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.387237 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jlr6\" (UniqueName: \"kubernetes.io/projected/c3fad218-a4aa-4977-94cc-7e643073165b-kube-api-access-5jlr6\") on node \"crc\" DevicePath \"\"" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.658407 4687 generic.go:334] "Generic (PLEG): container finished" podID="c3fad218-a4aa-4977-94cc-7e643073165b" containerID="09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19" exitCode=0 Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.658664 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2c58" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.658589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2c58" event={"ID":"c3fad218-a4aa-4977-94cc-7e643073165b","Type":"ContainerDied","Data":"09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19"} Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.658703 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2c58" event={"ID":"c3fad218-a4aa-4977-94cc-7e643073165b","Type":"ContainerDied","Data":"d35576f66b61370c11295aa56cc1cf33e2ee515d61c20d2b10fc2927c52ba6fe"} Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.658720 4687 scope.go:117] "RemoveContainer" containerID="09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.660057 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6dfp" event={"ID":"a5de4888-1093-45b1-9745-97df7aef8cd7","Type":"ContainerStarted","Data":"12e9f39c897adf251b85f2cb01001c98308d02022f8e90f14478f5da586de6ef"} Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.693165 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2c58"] Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.696224 4687 scope.go:117] "RemoveContainer" containerID="270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.704099 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x2c58"] Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.719670 4687 scope.go:117] "RemoveContainer" containerID="889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.736043 4687 scope.go:117] "RemoveContainer" containerID="09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19" Mar 12 16:16:50 crc kubenswrapper[4687]: E0312 16:16:50.736552 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19\": container with ID starting with 09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19 not found: ID does not exist" containerID="09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.736607 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19"} err="failed to get container status \"09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19\": rpc error: code = NotFound desc = could not find container \"09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19\": container with ID starting with 09106e069e9660cfb79dd814127abf203fdbb00728021201a30c5c97603cae19 not found: ID does not exist" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.736648 4687 scope.go:117] "RemoveContainer" containerID="270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613" Mar 12 16:16:50 crc kubenswrapper[4687]: E0312 16:16:50.736985 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613\": container with ID starting with 270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613 not found: ID does not exist" containerID="270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.737075 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613"} err="failed to get container status \"270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613\": rpc error: code = NotFound desc = could not find container \"270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613\": container with ID starting with 270a5e74fb78723495e624cf78434298d56edee19fe3192999a37863b2c27613 not found: ID does not exist" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.737152 4687 scope.go:117] "RemoveContainer" containerID="889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3" Mar 12 16:16:50 crc kubenswrapper[4687]: E0312 16:16:50.737556 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3\": container with ID starting with 889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3 not found: ID does not exist" containerID="889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3" Mar 12 16:16:50 crc kubenswrapper[4687]: I0312 16:16:50.737584 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3"} err="failed to get container status \"889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3\": rpc error: code = NotFound desc = could not find container \"889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3\": container with ID starting with 889371db5e042a0c08fb097d301e1e78233aa54cf6ab0caba15f6c9a160e52e3 not found: ID does not exist" Mar 12 16:16:51 crc kubenswrapper[4687]: I0312 16:16:51.670885 4687 generic.go:334] "Generic (PLEG): container finished" podID="a5de4888-1093-45b1-9745-97df7aef8cd7" 
containerID="5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e" exitCode=0 Mar 12 16:16:51 crc kubenswrapper[4687]: I0312 16:16:51.670928 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6dfp" event={"ID":"a5de4888-1093-45b1-9745-97df7aef8cd7","Type":"ContainerDied","Data":"5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e"} Mar 12 16:16:51 crc kubenswrapper[4687]: I0312 16:16:51.745054 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" path="/var/lib/kubelet/pods/c3fad218-a4aa-4977-94cc-7e643073165b/volumes" Mar 12 16:16:53 crc kubenswrapper[4687]: I0312 16:16:53.693603 4687 generic.go:334] "Generic (PLEG): container finished" podID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerID="662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec" exitCode=0 Mar 12 16:16:53 crc kubenswrapper[4687]: I0312 16:16:53.693683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6dfp" event={"ID":"a5de4888-1093-45b1-9745-97df7aef8cd7","Type":"ContainerDied","Data":"662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec"} Mar 12 16:16:54 crc kubenswrapper[4687]: I0312 16:16:54.703434 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6dfp" event={"ID":"a5de4888-1093-45b1-9745-97df7aef8cd7","Type":"ContainerStarted","Data":"facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395"} Mar 12 16:16:54 crc kubenswrapper[4687]: I0312 16:16:54.730005 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g6dfp" podStartSLOduration=3.241218519 podStartE2EDuration="5.72998371s" podCreationTimestamp="2026-03-12 16:16:49 +0000 UTC" firstStartedPulling="2026-03-12 16:16:51.673074656 +0000 UTC m=+860.637037010" lastFinishedPulling="2026-03-12 16:16:54.161839847 +0000 UTC m=+863.125802201" observedRunningTime="2026-03-12 16:16:54.724284053 +0000 UTC m=+863.688246437" watchObservedRunningTime="2026-03-12 16:16:54.72998371 +0000 UTC m=+863.693946094" Mar 12 16:16:56 crc kubenswrapper[4687]: I0312 16:16:56.647065 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 12 16:16:56 crc kubenswrapper[4687]: I0312 16:16:56.647398 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:16:59 crc kubenswrapper[4687]: I0312 16:16:59.716973 4687 scope.go:117] "RemoveContainer" containerID="ca14ec39b05931bfc595271691001f874b9f86c96b87e8d97b87bdd24947f54d" Mar 12 16:16:59 crc kubenswrapper[4687]: I0312 16:16:59.847819 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:59 crc kubenswrapper[4687]: I0312 16:16:59.848154 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:16:59 crc kubenswrapper[4687]: I0312 16:16:59.898789 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:17:00 crc kubenswrapper[4687]: I0312 16:17:00.783507 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:17:00 crc kubenswrapper[4687]: I0312 16:17:00.826337 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6dfp"] Mar 12 16:17:02 crc kubenswrapper[4687]: I0312 16:17:02.759645 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g6dfp" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="registry-server" containerID="cri-o://facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395" gracePeriod=2 Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.159781 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.289982 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6vmf\" (UniqueName: \"kubernetes.io/projected/a5de4888-1093-45b1-9745-97df7aef8cd7-kube-api-access-x6vmf\") pod \"a5de4888-1093-45b1-9745-97df7aef8cd7\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.290116 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-utilities\") pod \"a5de4888-1093-45b1-9745-97df7aef8cd7\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.290241 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-catalog-content\") pod \"a5de4888-1093-45b1-9745-97df7aef8cd7\" (UID: \"a5de4888-1093-45b1-9745-97df7aef8cd7\") " Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.291520 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-utilities" (OuterVolumeSpecName: "utilities") pod "a5de4888-1093-45b1-9745-97df7aef8cd7" (UID: "a5de4888-1093-45b1-9745-97df7aef8cd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.312553 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5de4888-1093-45b1-9745-97df7aef8cd7-kube-api-access-x6vmf" (OuterVolumeSpecName: "kube-api-access-x6vmf") pod "a5de4888-1093-45b1-9745-97df7aef8cd7" (UID: "a5de4888-1093-45b1-9745-97df7aef8cd7"). InnerVolumeSpecName "kube-api-access-x6vmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.371109 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5de4888-1093-45b1-9745-97df7aef8cd7" (UID: "a5de4888-1093-45b1-9745-97df7aef8cd7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.392507 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.392545 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6vmf\" (UniqueName: \"kubernetes.io/projected/a5de4888-1093-45b1-9745-97df7aef8cd7-kube-api-access-x6vmf\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.392558 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5de4888-1093-45b1-9745-97df7aef8cd7-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.768114 4687 generic.go:334] "Generic (PLEG): container finished" podID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerID="facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395" exitCode=0 Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.768147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6dfp" event={"ID":"a5de4888-1093-45b1-9745-97df7aef8cd7","Type":"ContainerDied","Data":"facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395"} Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.768175 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6dfp" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.768189 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6dfp" event={"ID":"a5de4888-1093-45b1-9745-97df7aef8cd7","Type":"ContainerDied","Data":"12e9f39c897adf251b85f2cb01001c98308d02022f8e90f14478f5da586de6ef"} Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.768217 4687 scope.go:117] "RemoveContainer" containerID="facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.788918 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6dfp"] Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.797332 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g6dfp"] Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.798803 4687 scope.go:117] "RemoveContainer" containerID="662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.820722 4687 scope.go:117] "RemoveContainer" containerID="5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.855516 4687 scope.go:117] "RemoveContainer" containerID="facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395" Mar 12 16:17:03 crc kubenswrapper[4687]: E0312 16:17:03.855957 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395\": container with ID starting with facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395 not found: ID does not exist" containerID="facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.855990 
4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395"} err="failed to get container status \"facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395\": rpc error: code = NotFound desc = could not find container \"facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395\": container with ID starting with facd1b08f0d31d6064d5668d09ffc84dd144c449cf8dba6c4293ccc4c94d1395 not found: ID does not exist" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.856011 4687 scope.go:117] "RemoveContainer" containerID="662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec" Mar 12 16:17:03 crc kubenswrapper[4687]: E0312 16:17:03.856392 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec\": container with ID starting with 662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec not found: ID does not exist" containerID="662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.856417 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec"} err="failed to get container status \"662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec\": rpc error: code = NotFound desc = could not find container \"662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec\": container with ID starting with 662534c46328f031b55ebc415a4988162acb6756ad66169b381e8b9c973295ec not found: ID does not exist" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.856432 4687 scope.go:117] "RemoveContainer" containerID="5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e" Mar 12 16:17:03 crc kubenswrapper[4687]: E0312 16:17:03.856635 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e\": container with ID starting with 5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e not found: ID does not exist" containerID="5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e" Mar 12 16:17:03 crc kubenswrapper[4687]: I0312 16:17:03.856662 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e"} err="failed to get container status \"5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e\": rpc error: code = NotFound desc = could not find container \"5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e\": container with ID starting with 5fe1072f485d5544ac4998d9ceb8d87ed4393479eaf9e07aee72b2b39c7be03e not found: ID does not exist" Mar 12 16:17:05 crc kubenswrapper[4687]: I0312 16:17:05.740466 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" path="/var/lib/kubelet/pods/a5de4888-1093-45b1-9745-97df7aef8cd7/volumes" Mar 12 16:17:06 crc kubenswrapper[4687]: I0312 16:17:06.646839 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: 
waiting for 15s after being ready Mar 12 16:17:06 crc kubenswrapper[4687]: I0312 16:17:06.646902 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:17:16 crc kubenswrapper[4687]: I0312 16:17:16.647905 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.867562 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j8595"] Mar 12 16:17:17 crc kubenswrapper[4687]: E0312 16:17:17.868072 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="extract-content" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868083 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="extract-content" Mar 12 16:17:17 crc kubenswrapper[4687]: E0312 16:17:17.868098 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="extract-utilities" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868105 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="extract-utilities" Mar 12 16:17:17 crc kubenswrapper[4687]: E0312 16:17:17.868116 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="registry-server" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868122 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="registry-server" Mar 12 16:17:17 crc kubenswrapper[4687]: E0312 16:17:17.868132 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="extract-utilities" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868138 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="extract-utilities" Mar 12 16:17:17 crc kubenswrapper[4687]: E0312 16:17:17.868145 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="registry-server" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868150 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="registry-server" Mar 12 16:17:17 crc kubenswrapper[4687]: E0312 16:17:17.868163 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="extract-content" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868169 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="extract-content" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868279 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3fad218-a4aa-4977-94cc-7e643073165b" containerName="registry-server" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.868290 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5de4888-1093-45b1-9745-97df7aef8cd7" containerName="registry-server" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 
16:17:17.878161 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:17 crc kubenswrapper[4687]: I0312 16:17:17.900465 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8595"] Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.016740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-catalog-content\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.016792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78m8f\" (UniqueName: \"kubernetes.io/projected/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-kube-api-access-78m8f\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.016834 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-utilities\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.118731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-catalog-content\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.118789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78m8f\" (UniqueName: \"kubernetes.io/projected/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-kube-api-access-78m8f\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.118836 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-utilities\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.119457 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-catalog-content\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.119493 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-utilities\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 
16:17:18.140128 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78m8f\" (UniqueName: \"kubernetes.io/projected/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-kube-api-access-78m8f\") pod \"redhat-marketplace-j8595\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.203962 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.633172 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8595"] Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.872060 4687 generic.go:334] "Generic (PLEG): container finished" podID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerID="9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78" exitCode=0 Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.872106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8595" event={"ID":"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576","Type":"ContainerDied","Data":"9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78"} Mar 12 16:17:18 crc kubenswrapper[4687]: I0312 16:17:18.873887 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8595" event={"ID":"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576","Type":"ContainerStarted","Data":"a162186e8502422b522bc32c343f1ad0a65ee4aa59d2a07f1e2e4cbd871a4a43"} Mar 12 16:17:19 crc kubenswrapper[4687]: I0312 16:17:19.883054 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8595" event={"ID":"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576","Type":"ContainerStarted","Data":"3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2"} Mar 12 16:17:20 crc kubenswrapper[4687]: I0312 16:17:20.893014 4687 generic.go:334] "Generic (PLEG): container finished" podID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerID="3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2" exitCode=0 Mar 12 16:17:20 crc kubenswrapper[4687]: I0312 16:17:20.893058 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8595" event={"ID":"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576","Type":"ContainerDied","Data":"3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2"} Mar 12 16:17:21 crc kubenswrapper[4687]: I0312 16:17:21.901906 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8595" event={"ID":"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576","Type":"ContainerStarted","Data":"70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a"} Mar 12 16:17:21 crc kubenswrapper[4687]: I0312 16:17:21.931223 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8595" podStartSLOduration=2.475068682 podStartE2EDuration="4.931207944s" podCreationTimestamp="2026-03-12 16:17:17 +0000 UTC" firstStartedPulling="2026-03-12 16:17:18.873328432 +0000 UTC m=+887.837290766" lastFinishedPulling="2026-03-12 16:17:21.329467694 +0000 UTC m=+890.293430028" observedRunningTime="2026-03-12 16:17:21.922907545 +0000 UTC m=+890.886869889" watchObservedRunningTime="2026-03-12 16:17:21.931207944 +0000 UTC m=+890.895170288" Mar 12 16:17:28 crc kubenswrapper[4687]: I0312 16:17:28.204348 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:28 crc kubenswrapper[4687]: I0312 16:17:28.204740 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:28 crc kubenswrapper[4687]: I0312 16:17:28.248761 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:29 crc kubenswrapper[4687]: I0312 16:17:29.008649 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:29 crc kubenswrapper[4687]: I0312 16:17:29.059209 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8595"] Mar 12 16:17:30 crc kubenswrapper[4687]: I0312 16:17:30.963990 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8595" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="registry-server" containerID="cri-o://70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a" gracePeriod=2 Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.406080 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.526631 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-catalog-content\") pod \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.526962 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78m8f\" (UniqueName: \"kubernetes.io/projected/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-kube-api-access-78m8f\") pod \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.527085 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-utilities\") pod \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\" (UID: \"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576\") " Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.527879 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-utilities" (OuterVolumeSpecName: "utilities") pod "5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" (UID: "5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.538378 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-kube-api-access-78m8f" (OuterVolumeSpecName: "kube-api-access-78m8f") pod "5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" (UID: "5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576"). InnerVolumeSpecName "kube-api-access-78m8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.550913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" (UID: "5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.628684 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.628718 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78m8f\" (UniqueName: \"kubernetes.io/projected/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-kube-api-access-78m8f\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.628729 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.971639 4687 generic.go:334] "Generic (PLEG): container finished" podID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerID="70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a" exitCode=0 Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.971689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8595" event={"ID":"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576","Type":"ContainerDied","Data":"70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a"} Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.971909 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8595" event={"ID":"5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576","Type":"ContainerDied","Data":"a162186e8502422b522bc32c343f1ad0a65ee4aa59d2a07f1e2e4cbd871a4a43"} Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.971931 4687 scope.go:117] "RemoveContainer" containerID="70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.971709 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8595" Mar 12 16:17:31 crc kubenswrapper[4687]: I0312 16:17:31.994257 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8595"] Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.000801 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8595"] Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.014429 4687 scope.go:117] "RemoveContainer" containerID="3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2" Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.035694 4687 scope.go:117] "RemoveContainer" containerID="9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78" Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.074077 4687 scope.go:117] "RemoveContainer" containerID="70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a" Mar 12 16:17:32 crc kubenswrapper[4687]: E0312 16:17:32.074537 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a\": container with ID starting with 70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a not found: ID does not exist" containerID="70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a" Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.074578 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a"} err="failed to get container status \"70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a\": rpc error: code = NotFound desc = could not find container \"70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a\": container with ID starting with 70349b8fd7baeb61ff555ab925f0563d79d969fa77d8018634b0f79048ab1c3a not found: ID does not exist" Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.074598 4687 scope.go:117] "RemoveContainer" containerID="3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2" Mar 12 16:17:32 crc kubenswrapper[4687]: E0312 16:17:32.074889 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2\": container with ID starting with 3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2 not found: ID does not exist" containerID="3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2" Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.074918 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2"} err="failed to get container status \"3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2\": rpc error: code = NotFound desc = could not find container \"3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2\": container with ID starting with 3c01d46e6d4cdf49550170de14a2ed2690e15283d9f1957d5f0123c8ba225da2 not found: ID does not exist" Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.074937 4687 scope.go:117] "RemoveContainer" containerID="9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78" Mar 12 16:17:32 crc kubenswrapper[4687]: E0312 16:17:32.075157 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78\": container with ID starting with 9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78 not found: ID does not exist" containerID="9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78" Mar 12 16:17:32 crc kubenswrapper[4687]: I0312 16:17:32.075187 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78"} err="failed to get container status \"9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78\": rpc error: code = NotFound desc = could not find container \"9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78\": container with ID starting with 9cc5bb19c3a4e7067c78481cbcaa8eddc7ed75084fdb74b60cda389acee48b78 not found: ID does not exist" Mar 12 16:17:33 crc kubenswrapper[4687]: I0312 16:17:33.746160 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" path="/var/lib/kubelet/pods/5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576/volumes" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.559382 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-pqh4t"] Mar 12 16:17:34 crc kubenswrapper[4687]: E0312 16:17:34.560110 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="extract-content" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.560136 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="extract-content" Mar 12 16:17:34 crc kubenswrapper[4687]: E0312 16:17:34.560167 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="extract-utilities" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.560180 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="extract-utilities" Mar 12 16:17:34 crc kubenswrapper[4687]: E0312 16:17:34.560212 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="registry-server" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.560225 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="registry-server" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.560452 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d510e1f-3fb7-4f3b-9655-bc3cf2ed9576" containerName="registry-server" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.561336 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.565248 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.565595 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.566588 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-p2fxk" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.567062 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.567681 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.574536 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.601310 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-pqh4t"] Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.643229 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pqh4t"] Mar 12 16:17:34 crc kubenswrapper[4687]: E0312 16:17:34.643880 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-bdvs8 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-pqh4t" podUID="2dd88ebc-a139-40d6-95ff-63269ea7d2ab" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.675838 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config-openshift-service-cacrt\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.675994 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676059 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-sa-token\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676096 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-syslog-receiver\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676138 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-tmp\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676169 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-metrics\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676260 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-trusted-ca\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676289 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-entrypoint\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676397 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvs8\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-kube-api-access-bdvs8\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676478 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-token\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.676611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-datadir\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.777889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-token\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.777936 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-datadir\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.777997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config-openshift-service-cacrt\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-sa-token\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778090 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-syslog-receiver\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778116 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-tmp\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778136 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-metrics\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778172 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-trusted-ca\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778192 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-entrypoint\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778245 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvs8\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-kube-api-access-bdvs8\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.778108 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-datadir\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.779036 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config-openshift-service-cacrt\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.779111 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.779676 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-entrypoint\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.779769 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-trusted-ca\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.786748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-tmp\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.787211 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-metrics\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.787214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-syslog-receiver\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.787274 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-token\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.795711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-sa-token\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.797397 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvs8\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-kube-api-access-bdvs8\") pod \"collector-pqh4t\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " pod="openshift-logging/collector-pqh4t" Mar 12 
16:17:34 crc kubenswrapper[4687]: I0312 16:17:34.995423 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pqh4t" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.004133 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pqh4t" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.183927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-datadir\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184083 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-trusted-ca\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184059 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-datadir" (OuterVolumeSpecName: "datadir") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184201 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-syslog-receiver\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184271 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdvs8\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-kube-api-access-bdvs8\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184334 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-entrypoint\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-metrics\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-token\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184562 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config-openshift-service-cacrt\") pod 
\"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184640 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-tmp\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184689 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-sa-token\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.184756 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config\") pod \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\" (UID: \"2dd88ebc-a139-40d6-95ff-63269ea7d2ab\") " Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.185507 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.185602 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config" (OuterVolumeSpecName: "config") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.185672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.185942 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.185968 4687 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-datadir\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.185982 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.185995 4687 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-entrypoint\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.186007 4687 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.187685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-kube-api-access-bdvs8" (OuterVolumeSpecName: "kube-api-access-bdvs8") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "kube-api-access-bdvs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.188463 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.188602 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-sa-token" (OuterVolumeSpecName: "sa-token") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.189519 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-tmp" (OuterVolumeSpecName: "tmp") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.190188 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-token" (OuterVolumeSpecName: "collector-token") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.192139 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-metrics" (OuterVolumeSpecName: "metrics") pod "2dd88ebc-a139-40d6-95ff-63269ea7d2ab" (UID: "2dd88ebc-a139-40d6-95ff-63269ea7d2ab"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.287350 4687 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.287440 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdvs8\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-kube-api-access-bdvs8\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.287470 4687 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.287492 4687 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-collector-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.287511 4687 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-tmp\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:35 crc kubenswrapper[4687]: I0312 16:17:35.287529 4687 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2dd88ebc-a139-40d6-95ff-63269ea7d2ab-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.003751 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pqh4t" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.087875 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pqh4t"] Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.095074 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-pqh4t"] Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.102003 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-5rr22"] Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.102991 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.110454 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5rr22"] Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.113304 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.113763 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-p2fxk" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.113964 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.114459 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.115762 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.122270 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.200826 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-sa-token\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.200880 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-trusted-ca\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.200956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-entrypoint\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.200981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-token\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.201112 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-syslog-receiver\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.201141 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07b53efc-2b5d-409e-a999-9504a96ff173-datadir\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " 
pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.201199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-metrics\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.201228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b53efc-2b5d-409e-a999-9504a96ff173-tmp\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.201272 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.201299 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config-openshift-service-cacrt\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.201325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghbx\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-kube-api-access-8ghbx\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.302845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.302909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config-openshift-service-cacrt\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.302937 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ghbx\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-kube-api-access-8ghbx\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.302989 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-sa-token\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303050 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-trusted-ca\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303081 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-entrypoint\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303108 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-token\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-syslog-receiver\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07b53efc-2b5d-409e-a999-9504a96ff173-datadir\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-metrics\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303240 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b53efc-2b5d-409e-a999-9504a96ff173-tmp\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303438 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07b53efc-2b5d-409e-a999-9504a96ff173-datadir\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303933 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.303931 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config-openshift-service-cacrt\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " 
pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.304664 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-trusted-ca\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.305402 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-entrypoint\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.307210 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-token\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.307648 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-syslog-receiver\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.308779 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b53efc-2b5d-409e-a999-9504a96ff173-tmp\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.320152 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-metrics\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.321141 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ghbx\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-kube-api-access-8ghbx\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.329558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-sa-token\") pod \"collector-5rr22\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.427308 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5rr22" Mar 12 16:17:36 crc kubenswrapper[4687]: I0312 16:17:36.851845 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-5rr22"] Mar 12 16:17:36 crc kubenswrapper[4687]: W0312 16:17:36.859567 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07b53efc_2b5d_409e_a999_9504a96ff173.slice/crio-7866a04618f8449a63fc1e7fb229fe2fac24661576f05de2ee732d3b0a80e675 WatchSource:0}: Error finding container 7866a04618f8449a63fc1e7fb229fe2fac24661576f05de2ee732d3b0a80e675: Status 404 returned error can't find the container with id 7866a04618f8449a63fc1e7fb229fe2fac24661576f05de2ee732d3b0a80e675 Mar 12 16:17:37 crc kubenswrapper[4687]: I0312 16:17:37.011850 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5rr22" event={"ID":"07b53efc-2b5d-409e-a999-9504a96ff173","Type":"ContainerStarted","Data":"7866a04618f8449a63fc1e7fb229fe2fac24661576f05de2ee732d3b0a80e675"} Mar 12 16:17:37 crc kubenswrapper[4687]: I0312 16:17:37.754890 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd88ebc-a139-40d6-95ff-63269ea7d2ab" path="/var/lib/kubelet/pods/2dd88ebc-a139-40d6-95ff-63269ea7d2ab/volumes" Mar 12 16:17:44 crc kubenswrapper[4687]: I0312 16:17:44.060042 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5rr22" event={"ID":"07b53efc-2b5d-409e-a999-9504a96ff173","Type":"ContainerStarted","Data":"a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6"} Mar 12 16:17:44 crc kubenswrapper[4687]: I0312 16:17:44.081246 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-5rr22" podStartSLOduration=1.54973341 podStartE2EDuration="8.081227984s" podCreationTimestamp="2026-03-12 16:17:36 +0000 UTC" firstStartedPulling="2026-03-12 16:17:36.861287607 +0000 UTC m=+905.825249941" lastFinishedPulling="2026-03-12 16:17:43.392782171 +0000 UTC m=+912.356744515" observedRunningTime="2026-03-12 16:17:44.079004873 +0000 UTC m=+913.042967227" watchObservedRunningTime="2026-03-12 16:17:44.081227984 +0000 UTC m=+913.045190328" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.143836 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555538-m77gz"] Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.146637 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555538-m77gz" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.151697 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.151714 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.152337 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.165708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555538-m77gz"] Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.307648 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq87g\" (UniqueName: \"kubernetes.io/projected/817df5b4-8432-4fb7-a823-330d30f4bb59-kube-api-access-mq87g\") pod \"auto-csr-approver-29555538-m77gz\" (UID: \"817df5b4-8432-4fb7-a823-330d30f4bb59\") " pod="openshift-infra/auto-csr-approver-29555538-m77gz" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.409658 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq87g\" (UniqueName: \"kubernetes.io/projected/817df5b4-8432-4fb7-a823-330d30f4bb59-kube-api-access-mq87g\") pod \"auto-csr-approver-29555538-m77gz\" (UID: \"817df5b4-8432-4fb7-a823-330d30f4bb59\") " pod="openshift-infra/auto-csr-approver-29555538-m77gz" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.433282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq87g\" (UniqueName: \"kubernetes.io/projected/817df5b4-8432-4fb7-a823-330d30f4bb59-kube-api-access-mq87g\") pod \"auto-csr-approver-29555538-m77gz\" (UID: \"817df5b4-8432-4fb7-a823-330d30f4bb59\") " pod="openshift-infra/auto-csr-approver-29555538-m77gz" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.468459 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555538-m77gz" Mar 12 16:18:00 crc kubenswrapper[4687]: I0312 16:18:00.861086 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555538-m77gz"] Mar 12 16:18:00 crc kubenswrapper[4687]: W0312 16:18:00.870602 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod817df5b4_8432_4fb7_a823_330d30f4bb59.slice/crio-b3bb560e5afbdde8b7b67ecf89b409fd9d8071370909dea17feaabf5ef1f0aeb WatchSource:0}: Error finding container b3bb560e5afbdde8b7b67ecf89b409fd9d8071370909dea17feaabf5ef1f0aeb: Status 404 returned error can't find the container with id b3bb560e5afbdde8b7b67ecf89b409fd9d8071370909dea17feaabf5ef1f0aeb Mar 12 16:18:01 crc kubenswrapper[4687]: I0312 16:18:01.222486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555538-m77gz" event={"ID":"817df5b4-8432-4fb7-a823-330d30f4bb59","Type":"ContainerStarted","Data":"b3bb560e5afbdde8b7b67ecf89b409fd9d8071370909dea17feaabf5ef1f0aeb"} Mar 12 16:18:03 crc kubenswrapper[4687]: I0312 16:18:03.244886 4687 generic.go:334] "Generic (PLEG): container finished" podID="817df5b4-8432-4fb7-a823-330d30f4bb59" containerID="80d3001c8aeacbafac09cb9e169c90812443f58c6190dc04a9b61436a60ca061" exitCode=0 Mar 12 16:18:03 crc kubenswrapper[4687]: I0312 16:18:03.244964 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555538-m77gz" event={"ID":"817df5b4-8432-4fb7-a823-330d30f4bb59","Type":"ContainerDied","Data":"80d3001c8aeacbafac09cb9e169c90812443f58c6190dc04a9b61436a60ca061"} Mar 12 16:18:04 crc kubenswrapper[4687]: I0312 16:18:04.590601 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555538-m77gz" Mar 12 16:18:04 crc kubenswrapper[4687]: I0312 16:18:04.675044 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq87g\" (UniqueName: \"kubernetes.io/projected/817df5b4-8432-4fb7-a823-330d30f4bb59-kube-api-access-mq87g\") pod \"817df5b4-8432-4fb7-a823-330d30f4bb59\" (UID: \"817df5b4-8432-4fb7-a823-330d30f4bb59\") " Mar 12 16:18:04 crc kubenswrapper[4687]: I0312 16:18:04.682400 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817df5b4-8432-4fb7-a823-330d30f4bb59-kube-api-access-mq87g" (OuterVolumeSpecName: "kube-api-access-mq87g") pod "817df5b4-8432-4fb7-a823-330d30f4bb59" (UID: "817df5b4-8432-4fb7-a823-330d30f4bb59"). InnerVolumeSpecName "kube-api-access-mq87g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:18:04 crc kubenswrapper[4687]: I0312 16:18:04.777137 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq87g\" (UniqueName: \"kubernetes.io/projected/817df5b4-8432-4fb7-a823-330d30f4bb59-kube-api-access-mq87g\") on node \"crc\" DevicePath \"\"" Mar 12 16:18:05 crc kubenswrapper[4687]: I0312 16:18:05.263938 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555538-m77gz" event={"ID":"817df5b4-8432-4fb7-a823-330d30f4bb59","Type":"ContainerDied","Data":"b3bb560e5afbdde8b7b67ecf89b409fd9d8071370909dea17feaabf5ef1f0aeb"} Mar 12 16:18:05 crc kubenswrapper[4687]: I0312 16:18:05.264179 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3bb560e5afbdde8b7b67ecf89b409fd9d8071370909dea17feaabf5ef1f0aeb" Mar 12 16:18:05 crc kubenswrapper[4687]: I0312 16:18:05.264050 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555538-m77gz" Mar 12 16:18:05 crc kubenswrapper[4687]: I0312 16:18:05.653339 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555532-dxrn2"] Mar 12 16:18:05 crc kubenswrapper[4687]: I0312 16:18:05.656241 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555532-dxrn2"] Mar 12 16:18:05 crc kubenswrapper[4687]: I0312 16:18:05.742632 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c209e8d9-555d-4362-a44a-0279015e41f1" path="/var/lib/kubelet/pods/c209e8d9-555d-4362-a44a-0279015e41f1/volumes" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.404769 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt"] Mar 12 16:18:13 crc kubenswrapper[4687]: E0312 16:18:13.405670 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817df5b4-8432-4fb7-a823-330d30f4bb59" containerName="oc" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.405687 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="817df5b4-8432-4fb7-a823-330d30f4bb59" containerName="oc" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.405843 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="817df5b4-8432-4fb7-a823-330d30f4bb59" containerName="oc" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.407114 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.409192 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.410570 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt"] Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.534858 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.534929 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xclk\" (UniqueName: \"kubernetes.io/projected/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-kube-api-access-7xclk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.534986 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.635969 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.636063 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.636128 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xclk\" (UniqueName: \"kubernetes.io/projected/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-kube-api-access-7xclk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.636452 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.636497 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.659380 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xclk\" (UniqueName: \"kubernetes.io/projected/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-kube-api-access-7xclk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:13 crc kubenswrapper[4687]: I0312 16:18:13.723708 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:14 crc kubenswrapper[4687]: I0312 16:18:14.098802 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt"] Mar 12 16:18:14 crc kubenswrapper[4687]: I0312 16:18:14.122207 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:18:14 crc kubenswrapper[4687]: I0312 16:18:14.122264 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:18:14 crc kubenswrapper[4687]: I0312 16:18:14.328677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" event={"ID":"2ac13e69-6be5-4b89-9a5d-9c535b368b5e","Type":"ContainerStarted","Data":"7cb10588b3e7144d44b353265b20aea02fd2e274210c4aba0da5943cbf5c2c22"} Mar 12 16:18:14 crc kubenswrapper[4687]: I0312 16:18:14.328728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" event={"ID":"2ac13e69-6be5-4b89-9a5d-9c535b368b5e","Type":"ContainerStarted","Data":"e99e84ef6893105cce0c4c1ed8426437ff3ed67de802f06f021a5fe06688eee8"} Mar 12 16:18:15 crc kubenswrapper[4687]: I0312 16:18:15.336623 4687 generic.go:334] "Generic (PLEG): container finished" podID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerID="7cb10588b3e7144d44b353265b20aea02fd2e274210c4aba0da5943cbf5c2c22" exitCode=0 Mar 12 16:18:15 crc kubenswrapper[4687]: I0312 16:18:15.336919 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" event={"ID":"2ac13e69-6be5-4b89-9a5d-9c535b368b5e","Type":"ContainerDied","Data":"7cb10588b3e7144d44b353265b20aea02fd2e274210c4aba0da5943cbf5c2c22"} Mar 12 16:18:17 crc kubenswrapper[4687]: I0312 16:18:17.352369 4687 generic.go:334] "Generic (PLEG): container finished" podID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerID="76352bef9c23a232972feff9a220cdf46fe58dceb48d59d52c1d5049062b4122" exitCode=0 Mar 12 16:18:17 crc kubenswrapper[4687]: I0312 16:18:17.352506 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" event={"ID":"2ac13e69-6be5-4b89-9a5d-9c535b368b5e","Type":"ContainerDied","Data":"76352bef9c23a232972feff9a220cdf46fe58dceb48d59d52c1d5049062b4122"} Mar 12 16:18:18 crc kubenswrapper[4687]: I0312 16:18:18.363325 4687 generic.go:334] "Generic (PLEG): container finished" podID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerID="ccf8ca622050c6b8528b83db7bfea5da35f823360fb5e281807b6a62f440f6b0" exitCode=0 Mar 12 16:18:18 crc kubenswrapper[4687]: I0312 16:18:18.363413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" event={"ID":"2ac13e69-6be5-4b89-9a5d-9c535b368b5e","Type":"ContainerDied","Data":"ccf8ca622050c6b8528b83db7bfea5da35f823360fb5e281807b6a62f440f6b0"} Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.664985 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.735779 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xclk\" (UniqueName: \"kubernetes.io/projected/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-kube-api-access-7xclk\") pod \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.735927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-bundle\") pod \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.735981 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-util\") pod \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\" (UID: \"2ac13e69-6be5-4b89-9a5d-9c535b368b5e\") " Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.737057 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-bundle" (OuterVolumeSpecName: "bundle") pod "2ac13e69-6be5-4b89-9a5d-9c535b368b5e" (UID: "2ac13e69-6be5-4b89-9a5d-9c535b368b5e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.740795 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-kube-api-access-7xclk" (OuterVolumeSpecName: "kube-api-access-7xclk") pod "2ac13e69-6be5-4b89-9a5d-9c535b368b5e" (UID: "2ac13e69-6be5-4b89-9a5d-9c535b368b5e"). 
InnerVolumeSpecName "kube-api-access-7xclk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.838025 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:18:19 crc kubenswrapper[4687]: I0312 16:18:19.838077 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xclk\" (UniqueName: \"kubernetes.io/projected/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-kube-api-access-7xclk\") on node \"crc\" DevicePath \"\"" Mar 12 16:18:20 crc kubenswrapper[4687]: I0312 16:18:20.027144 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-util" (OuterVolumeSpecName: "util") pod "2ac13e69-6be5-4b89-9a5d-9c535b368b5e" (UID: "2ac13e69-6be5-4b89-9a5d-9c535b368b5e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:18:20 crc kubenswrapper[4687]: I0312 16:18:20.042180 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ac13e69-6be5-4b89-9a5d-9c535b368b5e-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:18:20 crc kubenswrapper[4687]: I0312 16:18:20.385142 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" event={"ID":"2ac13e69-6be5-4b89-9a5d-9c535b368b5e","Type":"ContainerDied","Data":"e99e84ef6893105cce0c4c1ed8426437ff3ed67de802f06f021a5fe06688eee8"} Mar 12 16:18:20 crc kubenswrapper[4687]: I0312 16:18:20.385186 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99e84ef6893105cce0c4c1ed8426437ff3ed67de802f06f021a5fe06688eee8" Mar 12 16:18:20 crc kubenswrapper[4687]: I0312 16:18:20.385262 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.038208 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq"] Mar 12 16:18:23 crc kubenswrapper[4687]: E0312 16:18:23.038752 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerName="extract" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.038766 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerName="extract" Mar 12 16:18:23 crc kubenswrapper[4687]: E0312 16:18:23.038789 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerName="pull" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.038796 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerName="pull" Mar 12 16:18:23 crc kubenswrapper[4687]: E0312 16:18:23.038821 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerName="util" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.038829 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerName="util" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.038985 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac13e69-6be5-4b89-9a5d-9c535b368b5e" containerName="extract" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.039660 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.044101 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rgkqt" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.044330 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.044797 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.057039 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq"] Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.185925 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcsgw\" (UniqueName: \"kubernetes.io/projected/7e98880b-25b5-4e73-a4aa-aa70c426dc07-kube-api-access-wcsgw\") pod \"nmstate-operator-796d4cfff4-fz9qq\" (UID: \"7e98880b-25b5-4e73-a4aa-aa70c426dc07\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.287380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcsgw\" (UniqueName: \"kubernetes.io/projected/7e98880b-25b5-4e73-a4aa-aa70c426dc07-kube-api-access-wcsgw\") pod \"nmstate-operator-796d4cfff4-fz9qq\" (UID: \"7e98880b-25b5-4e73-a4aa-aa70c426dc07\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.313068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcsgw\" 
(UniqueName: \"kubernetes.io/projected/7e98880b-25b5-4e73-a4aa-aa70c426dc07-kube-api-access-wcsgw\") pod \"nmstate-operator-796d4cfff4-fz9qq\" (UID: \"7e98880b-25b5-4e73-a4aa-aa70c426dc07\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.360070 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" Mar 12 16:18:23 crc kubenswrapper[4687]: I0312 16:18:23.767036 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq"] Mar 12 16:18:24 crc kubenswrapper[4687]: I0312 16:18:24.412642 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" event={"ID":"7e98880b-25b5-4e73-a4aa-aa70c426dc07","Type":"ContainerStarted","Data":"a004c726d12e888ddc9a4f08f3350ad5f477d6b44d837360bc644c2777a28df3"} Mar 12 16:18:26 crc kubenswrapper[4687]: I0312 16:18:26.426322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" event={"ID":"7e98880b-25b5-4e73-a4aa-aa70c426dc07","Type":"ContainerStarted","Data":"aff5ecab8a1a351a85d2940769f3fcdeb006c381b1ff617983af567addd95789"} Mar 12 16:18:26 crc kubenswrapper[4687]: I0312 16:18:26.444140 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-fz9qq" podStartSLOduration=1.01740402 podStartE2EDuration="3.444122778s" podCreationTimestamp="2026-03-12 16:18:23 +0000 UTC" firstStartedPulling="2026-03-12 16:18:23.773751827 +0000 UTC m=+952.737714181" lastFinishedPulling="2026-03-12 16:18:26.200470595 +0000 UTC m=+955.164432939" observedRunningTime="2026-03-12 16:18:26.437547385 +0000 UTC m=+955.401509729" watchObservedRunningTime="2026-03-12 16:18:26.444122778 +0000 UTC m=+955.408085142" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.452855 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.454154 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.456897 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pdfcr" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.458190 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jp88k"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.459099 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.464058 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.497004 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jp88k"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.545423 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.558600 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pjvcq"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.559405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8ls\" (UniqueName: \"kubernetes.io/projected/3e0f8548-f03f-4f9e-a422-13093d87d32e-kube-api-access-np8ls\") pod \"nmstate-metrics-9b8c8685d-7b5ps\" (UID: \"3e0f8548-f03f-4f9e-a422-13093d87d32e\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.559472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/97a0494c-2509-4e76-afd9-fd2be9482d5d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jp88k\" (UID: \"97a0494c-2509-4e76-afd9-fd2be9482d5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.559514 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhkc\" (UniqueName: \"kubernetes.io/projected/97a0494c-2509-4e76-afd9-fd2be9482d5d-kube-api-access-4fhkc\") pod \"nmstate-webhook-5f558f5558-jp88k\" (UID: \"97a0494c-2509-4e76-afd9-fd2be9482d5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.559576 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.617228 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.618140 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.619760 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.619957 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2zkxk" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.620720 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.633236 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.661454 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdlx\" (UniqueName: \"kubernetes.io/projected/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-kube-api-access-wzdlx\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.661514 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-ovs-socket\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.661558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8ls\" (UniqueName: \"kubernetes.io/projected/3e0f8548-f03f-4f9e-a422-13093d87d32e-kube-api-access-np8ls\") pod \"nmstate-metrics-9b8c8685d-7b5ps\" (UID: \"3e0f8548-f03f-4f9e-a422-13093d87d32e\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.661597 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-dbus-socket\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.661647 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/97a0494c-2509-4e76-afd9-fd2be9482d5d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jp88k\" (UID: \"97a0494c-2509-4e76-afd9-fd2be9482d5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.661685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fhkc\" (UniqueName: \"kubernetes.io/projected/97a0494c-2509-4e76-afd9-fd2be9482d5d-kube-api-access-4fhkc\") pod \"nmstate-webhook-5f558f5558-jp88k\" (UID: \"97a0494c-2509-4e76-afd9-fd2be9482d5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.661705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-nmstate-lock\") pod \"nmstate-handler-pjvcq\" (UID: 
\"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.669989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/97a0494c-2509-4e76-afd9-fd2be9482d5d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jp88k\" (UID: \"97a0494c-2509-4e76-afd9-fd2be9482d5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.680431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fhkc\" (UniqueName: \"kubernetes.io/projected/97a0494c-2509-4e76-afd9-fd2be9482d5d-kube-api-access-4fhkc\") pod \"nmstate-webhook-5f558f5558-jp88k\" (UID: \"97a0494c-2509-4e76-afd9-fd2be9482d5d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.684613 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8ls\" (UniqueName: \"kubernetes.io/projected/3e0f8548-f03f-4f9e-a422-13093d87d32e-kube-api-access-np8ls\") pod \"nmstate-metrics-9b8c8685d-7b5ps\" (UID: \"3e0f8548-f03f-4f9e-a422-13093d87d32e\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763393 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-dbus-socket\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763452 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6c4c\" (UniqueName: \"kubernetes.io/projected/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-kube-api-access-j6c4c\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763495 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763526 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-nmstate-lock\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763550 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763582 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdlx\" 
(UniqueName: \"kubernetes.io/projected/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-kube-api-access-wzdlx\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763609 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-ovs-socket\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763700 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-ovs-socket\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.763745 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-nmstate-lock\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.764211 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-dbus-socket\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.774144 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.794799 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.803197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdlx\" (UniqueName: \"kubernetes.io/projected/40b2acd2-7fab-41ca-9ba5-7f8a5dc50606-kube-api-access-wzdlx\") pod \"nmstate-handler-pjvcq\" (UID: \"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606\") " pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.835548 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6775b889d8-xvw66"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.837279 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.851627 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6775b889d8-xvw66"] Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.866480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6c4c\" (UniqueName: \"kubernetes.io/projected/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-kube-api-access-j6c4c\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.866542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.866613 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: E0312 16:18:27.866857 4687 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 12 16:18:27 crc kubenswrapper[4687]: E0312 16:18:27.866924 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-plugin-serving-cert podName:0dbc8317-dcca-4eeb-a5c7-ec72be4a0278 nodeName:}" failed. No retries permitted until 2026-03-12 16:18:28.366905734 +0000 UTC m=+957.330868078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-sw8zc" (UID: "0dbc8317-dcca-4eeb-a5c7-ec72be4a0278") : secret "plugin-serving-cert" not found Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.868184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.880160 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.910584 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6c4c\" (UniqueName: \"kubernetes.io/projected/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-kube-api-access-j6c4c\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.978436 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-serving-cert\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.978501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-service-ca\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.978583 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-oauth-serving-cert\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.978636 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-trusted-ca-bundle\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.978726 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-oauth-config\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.978794 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh25s\" (UniqueName: \"kubernetes.io/projected/f94357ad-9ce2-48be-b148-23cb9f8c0621-kube-api-access-rh25s\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:27 crc kubenswrapper[4687]: I0312 16:18:27.978820 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-config\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.081352 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-trusted-ca-bundle\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.081439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-oauth-config\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.081474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh25s\" (UniqueName: \"kubernetes.io/projected/f94357ad-9ce2-48be-b148-23cb9f8c0621-kube-api-access-rh25s\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.081495 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-config\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.081546 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-serving-cert\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.081566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-service-ca\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.081600 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-oauth-serving-cert\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.082451 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-trusted-ca-bundle\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.082673 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-config\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.083142 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-oauth-serving-cert\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.083280 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-service-ca\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.085182 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-oauth-config\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.088981 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-serving-cert\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.100973 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh25s\" (UniqueName: \"kubernetes.io/projected/f94357ad-9ce2-48be-b148-23cb9f8c0621-kube-api-access-rh25s\") pod \"console-6775b889d8-xvw66\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.198704 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.337887 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jp88k"] Mar 12 16:18:28 crc kubenswrapper[4687]: W0312 16:18:28.344484 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a0494c_2509_4e76_afd9_fd2be9482d5d.slice/crio-eb1801d5f46e04f9d250f612b3f98938a5427b66a24018504a6b6851f4821e0b WatchSource:0}: Error finding container eb1801d5f46e04f9d250f612b3f98938a5427b66a24018504a6b6851f4821e0b: Status 404 returned error can't find the container with id eb1801d5f46e04f9d250f612b3f98938a5427b66a24018504a6b6851f4821e0b Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.385439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.390086 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps"] Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.394999 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0dbc8317-dcca-4eeb-a5c7-ec72be4a0278-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-sw8zc\" (UID: \"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:28 crc kubenswrapper[4687]: W0312 16:18:28.404806 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e0f8548_f03f_4f9e_a422_13093d87d32e.slice/crio-ac234436dd1178e8f39059caa7f6947408c56f9f360692412a4dffa0b8e16bda WatchSource:0}: Error finding container ac234436dd1178e8f39059caa7f6947408c56f9f360692412a4dffa0b8e16bda: Status 404 returned error can't find the container with id ac234436dd1178e8f39059caa7f6947408c56f9f360692412a4dffa0b8e16bda Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.440484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pjvcq" event={"ID":"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606","Type":"ContainerStarted","Data":"cab4d3be5b693d8b994669c591a0c21a3a9be61894fd5df867ee1f42816fa24d"} Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.441395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" event={"ID":"3e0f8548-f03f-4f9e-a422-13093d87d32e","Type":"ContainerStarted","Data":"ac234436dd1178e8f39059caa7f6947408c56f9f360692412a4dffa0b8e16bda"} Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.442447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" event={"ID":"97a0494c-2509-4e76-afd9-fd2be9482d5d","Type":"ContainerStarted","Data":"eb1801d5f46e04f9d250f612b3f98938a5427b66a24018504a6b6851f4821e0b"} Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.539891 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.657801 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6775b889d8-xvw66"] Mar 12 16:18:28 crc kubenswrapper[4687]: I0312 16:18:28.939921 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc"] Mar 12 16:18:28 crc kubenswrapper[4687]: W0312 16:18:28.942318 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dbc8317_dcca_4eeb_a5c7_ec72be4a0278.slice/crio-0b1e46757b61fb5febeaa5098f444995a394935d6f69812a5e41aa9fa332116f WatchSource:0}: Error finding container 0b1e46757b61fb5febeaa5098f444995a394935d6f69812a5e41aa9fa332116f: Status 404 returned error can't find the container with id 0b1e46757b61fb5febeaa5098f444995a394935d6f69812a5e41aa9fa332116f Mar 12 16:18:29 crc kubenswrapper[4687]: I0312 16:18:29.452937 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" event={"ID":"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278","Type":"ContainerStarted","Data":"0b1e46757b61fb5febeaa5098f444995a394935d6f69812a5e41aa9fa332116f"} Mar 12 16:18:29 crc kubenswrapper[4687]: I0312 16:18:29.454627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6775b889d8-xvw66" event={"ID":"f94357ad-9ce2-48be-b148-23cb9f8c0621","Type":"ContainerStarted","Data":"3e493f37738432e3542fd7fd7ee440101b1fb1f8f45f94af567c20c16470f5cd"} Mar 12 16:18:29 crc kubenswrapper[4687]: I0312 16:18:29.454678 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6775b889d8-xvw66" event={"ID":"f94357ad-9ce2-48be-b148-23cb9f8c0621","Type":"ContainerStarted","Data":"eeda4ea8eeb869d5f952a404396f23f18136643cc933da50d07289ad7a1f92be"} Mar 12 16:18:29 crc kubenswrapper[4687]: I0312 16:18:29.475482 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6775b889d8-xvw66" podStartSLOduration=2.475465308 podStartE2EDuration="2.475465308s" podCreationTimestamp="2026-03-12 16:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:18:29.468933206 +0000 UTC m=+958.432895590" watchObservedRunningTime="2026-03-12 16:18:29.475465308 +0000 UTC m=+958.439427652" Mar 12 16:18:31 crc kubenswrapper[4687]: I0312 16:18:31.473016 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" event={"ID":"3e0f8548-f03f-4f9e-a422-13093d87d32e","Type":"ContainerStarted","Data":"8300f98871c99f4c54291c4114f3af81c4fee7004bee7e048288fc59306561a9"} Mar 12 16:18:31 crc kubenswrapper[4687]: I0312 16:18:31.474432 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" event={"ID":"97a0494c-2509-4e76-afd9-fd2be9482d5d","Type":"ContainerStarted","Data":"4620b26fe3d9905f5e08ba2bc545e65416d9226dec621fefb7a7b83ebbee1f9a"} Mar 12 16:18:31 crc kubenswrapper[4687]: I0312 16:18:31.474516 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:18:31 crc kubenswrapper[4687]: I0312 16:18:31.478084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pjvcq" 
event={"ID":"40b2acd2-7fab-41ca-9ba5-7f8a5dc50606","Type":"ContainerStarted","Data":"c3c869fb37b348bf642f366d4156b5718ebf2b76df94c7a40fccaac03da6a782"} Mar 12 16:18:31 crc kubenswrapper[4687]: I0312 16:18:31.478222 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:31 crc kubenswrapper[4687]: I0312 16:18:31.489893 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" podStartSLOduration=2.097687921 podStartE2EDuration="4.489877014s" podCreationTimestamp="2026-03-12 16:18:27 +0000 UTC" firstStartedPulling="2026-03-12 16:18:28.34879219 +0000 UTC m=+957.312754534" lastFinishedPulling="2026-03-12 16:18:30.740981283 +0000 UTC m=+959.704943627" observedRunningTime="2026-03-12 16:18:31.486713506 +0000 UTC m=+960.450675850" watchObservedRunningTime="2026-03-12 16:18:31.489877014 +0000 UTC m=+960.453839358" Mar 12 16:18:31 crc kubenswrapper[4687]: I0312 16:18:31.515931 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pjvcq" podStartSLOduration=1.73236234 podStartE2EDuration="4.515913991s" podCreationTimestamp="2026-03-12 16:18:27 +0000 UTC" firstStartedPulling="2026-03-12 16:18:27.950304673 +0000 UTC m=+956.914267007" lastFinishedPulling="2026-03-12 16:18:30.733856314 +0000 UTC m=+959.697818658" observedRunningTime="2026-03-12 16:18:31.509513162 +0000 UTC m=+960.473475516" watchObservedRunningTime="2026-03-12 16:18:31.515913991 +0000 UTC m=+960.479876325" Mar 12 16:18:32 crc kubenswrapper[4687]: I0312 16:18:32.488289 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" event={"ID":"0dbc8317-dcca-4eeb-a5c7-ec72be4a0278","Type":"ContainerStarted","Data":"170086d4c7fcfa33700ac7d2df4f0ecbeabfa7134038e4719ee9417cbc79bf11"} Mar 12 16:18:32 crc kubenswrapper[4687]: I0312 16:18:32.505178 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-sw8zc" podStartSLOduration=2.646765981 podStartE2EDuration="5.505159582s" podCreationTimestamp="2026-03-12 16:18:27 +0000 UTC" firstStartedPulling="2026-03-12 16:18:28.945539762 +0000 UTC m=+957.909502106" lastFinishedPulling="2026-03-12 16:18:31.803933363 +0000 UTC m=+960.767895707" observedRunningTime="2026-03-12 16:18:32.500995556 +0000 UTC m=+961.464957890" watchObservedRunningTime="2026-03-12 16:18:32.505159582 +0000 UTC m=+961.469121926" Mar 12 16:18:34 crc kubenswrapper[4687]: I0312 16:18:34.505618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" event={"ID":"3e0f8548-f03f-4f9e-a422-13093d87d32e","Type":"ContainerStarted","Data":"1162022cf71c1b6bfa888e40abc7402fba2047552356952c7c4b539cbd207ffc"} Mar 12 16:18:34 crc kubenswrapper[4687]: I0312 16:18:34.520990 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7b5ps" podStartSLOduration=2.419995469 podStartE2EDuration="7.520968787s" podCreationTimestamp="2026-03-12 16:18:27 +0000 UTC" firstStartedPulling="2026-03-12 16:18:28.407023845 +0000 UTC m=+957.370986189" lastFinishedPulling="2026-03-12 16:18:33.507997173 +0000 UTC m=+962.471959507" observedRunningTime="2026-03-12 16:18:34.520487063 +0000 UTC m=+963.484449447" watchObservedRunningTime="2026-03-12 16:18:34.520968787 +0000 UTC m=+963.484931131" Mar 12 16:18:37 crc 
kubenswrapper[4687]: I0312 16:18:37.909070 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 16:18:38 crc kubenswrapper[4687]: I0312 16:18:38.199147 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:38 crc kubenswrapper[4687]: I0312 16:18:38.199496 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:38 crc kubenswrapper[4687]: I0312 16:18:38.204578 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:38 crc kubenswrapper[4687]: I0312 16:18:38.537804 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:18:38 crc kubenswrapper[4687]: I0312 16:18:38.603144 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5fcb6d6857-pdg8m"] Mar 12 16:18:44 crc kubenswrapper[4687]: I0312 16:18:44.121653 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:18:44 crc kubenswrapper[4687]: I0312 16:18:44.122170 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:18:47 crc kubenswrapper[4687]: I0312 16:18:47.801387 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 16:19:00 crc kubenswrapper[4687]: I0312 16:19:00.164526 4687 scope.go:117] "RemoveContainer" containerID="deafa1508404f6ad3f2053e701f89af1a973ad59e59ac5b836a915e9a2b75f22" Mar 12 16:19:03 crc kubenswrapper[4687]: I0312 16:19:03.646956 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5fcb6d6857-pdg8m" podUID="b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" containerName="console" containerID="cri-o://cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d" gracePeriod=15 Mar 12 16:19:03 crc kubenswrapper[4687]: I0312 16:19:03.837833 4687 patch_prober.go:28] interesting pod/console-5fcb6d6857-pdg8m container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.96:8443/health\": dial tcp 10.217.0.96:8443: connect: connection refused" start-of-body= Mar 12 16:19:03 crc kubenswrapper[4687]: I0312 16:19:03.838241 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5fcb6d6857-pdg8m" podUID="b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.96:8443/health\": dial tcp 10.217.0.96:8443: connect: connection refused" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.045204 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5fcb6d6857-pdg8m_b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e/console/0.log" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.045267 4687 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.196732 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-service-ca\") pod \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.197302 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-trusted-ca-bundle\") pod \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.197526 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-serving-cert\") pod \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.197677 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-oauth-config\") pod \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.197812 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-config\") pod \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.197960 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-oauth-serving-cert\") pod \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.198104 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22jkg\" (UniqueName: \"kubernetes.io/projected/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-kube-api-access-22jkg\") pod \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\" (UID: \"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e\") " Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.197564 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-service-ca" (OuterVolumeSpecName: "service-ca") pod "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" (UID: "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.197882 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" (UID: "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.198315 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-config" (OuterVolumeSpecName: "console-config") pod "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" (UID: "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.198436 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" (UID: "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.199142 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.199238 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.199318 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.199469 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.204872 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-kube-api-access-22jkg" (OuterVolumeSpecName: "kube-api-access-22jkg") pod "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" (UID: "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e"). InnerVolumeSpecName "kube-api-access-22jkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.207532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" (UID: "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.213580 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" (UID: "b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.300982 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22jkg\" (UniqueName: \"kubernetes.io/projected/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-kube-api-access-22jkg\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.301024 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.301037 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.764960 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5fcb6d6857-pdg8m_b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e/console/0.log" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.765011 4687 generic.go:334] "Generic (PLEG): container finished" podID="b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" containerID="cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d" exitCode=2 Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.765048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fcb6d6857-pdg8m" event={"ID":"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e","Type":"ContainerDied","Data":"cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d"} Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.765074 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5fcb6d6857-pdg8m" event={"ID":"b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e","Type":"ContainerDied","Data":"80f58c21411916dc86cbb8960759f504314fa10b733fed8b18d2012ee361a68a"} Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.765090 4687 scope.go:117] "RemoveContainer" containerID="cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.765221 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5fcb6d6857-pdg8m" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.795472 4687 scope.go:117] "RemoveContainer" containerID="cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d" Mar 12 16:19:04 crc kubenswrapper[4687]: E0312 16:19:04.798716 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d\": container with ID starting with cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d not found: ID does not exist" containerID="cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.798764 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d"} err="failed to get container status \"cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d\": rpc error: code = NotFound desc = could not find container \"cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d\": container with ID starting with cd728539fa0e3c86e0fb22611bde544ba78a3f4b9ad603b977a8408695836c5d not found: ID does not exist" Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.811411 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5fcb6d6857-pdg8m"] Mar 12 16:19:04 crc kubenswrapper[4687]: I0312 16:19:04.822068 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5fcb6d6857-pdg8m"] Mar 12 16:19:05 crc kubenswrapper[4687]: I0312 16:19:05.741084 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" path="/var/lib/kubelet/pods/b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e/volumes" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.449326 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64"] Mar 12 16:19:06 crc kubenswrapper[4687]: E0312 16:19:06.450138 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" containerName="console" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.450233 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" containerName="console" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.450437 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b31d9fa8-bcfd-4ba8-b6b2-b53a297af27e" containerName="console" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.451402 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.453070 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.463239 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64"] Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.533608 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.533670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.533710 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5mq6\" (UniqueName: \"kubernetes.io/projected/1d8d85c9-7b46-4030-8930-662b9b0012a3-kube-api-access-q5mq6\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.635579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.635632 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.635673 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5mq6\" (UniqueName: \"kubernetes.io/projected/1d8d85c9-7b46-4030-8930-662b9b0012a3-kube-api-access-q5mq6\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.636252 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.636382 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.657996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5mq6\" (UniqueName: \"kubernetes.io/projected/1d8d85c9-7b46-4030-8930-662b9b0012a3-kube-api-access-q5mq6\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:06 crc kubenswrapper[4687]: I0312 16:19:06.771748 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:07 crc kubenswrapper[4687]: I0312 16:19:07.036014 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64"] Mar 12 16:19:07 crc kubenswrapper[4687]: I0312 16:19:07.798566 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerID="ce3b781e7663b8bc20eaf1b0c372c75a00117954aed63772d753c7f067c96e94" exitCode=0 Mar 12 16:19:07 crc kubenswrapper[4687]: I0312 16:19:07.798883 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" event={"ID":"1d8d85c9-7b46-4030-8930-662b9b0012a3","Type":"ContainerDied","Data":"ce3b781e7663b8bc20eaf1b0c372c75a00117954aed63772d753c7f067c96e94"} Mar 12 16:19:07 crc kubenswrapper[4687]: I0312 16:19:07.798913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" event={"ID":"1d8d85c9-7b46-4030-8930-662b9b0012a3","Type":"ContainerStarted","Data":"6cd37b06780e67f2a8c40b72226ee2fbbdf68551594d2a1cec1fbc2f3b5bb3eb"} Mar 12 16:19:09 crc kubenswrapper[4687]: I0312 16:19:09.816339 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" event={"ID":"1d8d85c9-7b46-4030-8930-662b9b0012a3","Type":"ContainerStarted","Data":"c1ecbac91dac2a8f7c93e21eb1b499b14baef032ec3c0d2c2d046cfb814daa4b"} Mar 12 16:19:10 crc kubenswrapper[4687]: I0312 16:19:10.831568 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerID="c1ecbac91dac2a8f7c93e21eb1b499b14baef032ec3c0d2c2d046cfb814daa4b" exitCode=0 Mar 12 16:19:10 crc kubenswrapper[4687]: I0312 16:19:10.831886 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" 
event={"ID":"1d8d85c9-7b46-4030-8930-662b9b0012a3","Type":"ContainerDied","Data":"c1ecbac91dac2a8f7c93e21eb1b499b14baef032ec3c0d2c2d046cfb814daa4b"} Mar 12 16:19:11 crc kubenswrapper[4687]: I0312 16:19:11.842585 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerID="65e0a01091900ff8e0d013fb72d75095ede7bc5b95e6aa110c2346c40e62a897" exitCode=0 Mar 12 16:19:11 crc kubenswrapper[4687]: I0312 16:19:11.842714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" event={"ID":"1d8d85c9-7b46-4030-8930-662b9b0012a3","Type":"ContainerDied","Data":"65e0a01091900ff8e0d013fb72d75095ede7bc5b95e6aa110c2346c40e62a897"} Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.139646 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.241054 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-bundle\") pod \"1d8d85c9-7b46-4030-8930-662b9b0012a3\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.241173 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5mq6\" (UniqueName: \"kubernetes.io/projected/1d8d85c9-7b46-4030-8930-662b9b0012a3-kube-api-access-q5mq6\") pod \"1d8d85c9-7b46-4030-8930-662b9b0012a3\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.241190 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-util\") pod \"1d8d85c9-7b46-4030-8930-662b9b0012a3\" (UID: \"1d8d85c9-7b46-4030-8930-662b9b0012a3\") " Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.242880 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-bundle" (OuterVolumeSpecName: "bundle") pod "1d8d85c9-7b46-4030-8930-662b9b0012a3" (UID: "1d8d85c9-7b46-4030-8930-662b9b0012a3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.246734 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8d85c9-7b46-4030-8930-662b9b0012a3-kube-api-access-q5mq6" (OuterVolumeSpecName: "kube-api-access-q5mq6") pod "1d8d85c9-7b46-4030-8930-662b9b0012a3" (UID: "1d8d85c9-7b46-4030-8930-662b9b0012a3"). InnerVolumeSpecName "kube-api-access-q5mq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.250804 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-util" (OuterVolumeSpecName: "util") pod "1d8d85c9-7b46-4030-8930-662b9b0012a3" (UID: "1d8d85c9-7b46-4030-8930-662b9b0012a3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.342652 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5mq6\" (UniqueName: \"kubernetes.io/projected/1d8d85c9-7b46-4030-8930-662b9b0012a3-kube-api-access-q5mq6\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.342693 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.342706 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d8d85c9-7b46-4030-8930-662b9b0012a3-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.859957 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" event={"ID":"1d8d85c9-7b46-4030-8930-662b9b0012a3","Type":"ContainerDied","Data":"6cd37b06780e67f2a8c40b72226ee2fbbdf68551594d2a1cec1fbc2f3b5bb3eb"} Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.860014 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64" Mar 12 16:19:13 crc kubenswrapper[4687]: I0312 16:19:13.860033 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cd37b06780e67f2a8c40b72226ee2fbbdf68551594d2a1cec1fbc2f3b5bb3eb" Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.122236 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.122313 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.122399 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.123107 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd9807f39bd0fa78ee3cf947dc53d693272e6c942fe244d8ebc4e3c782d477d9"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.123174 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://dd9807f39bd0fa78ee3cf947dc53d693272e6c942fe244d8ebc4e3c782d477d9" gracePeriod=600 Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.871337 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="dd9807f39bd0fa78ee3cf947dc53d693272e6c942fe244d8ebc4e3c782d477d9" exitCode=0 Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.871393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"dd9807f39bd0fa78ee3cf947dc53d693272e6c942fe244d8ebc4e3c782d477d9"} Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.871873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"f1ffdc3800e5c81c03048cac69c6e0d74a9a699fce31dea6098c57fea14e8d99"} Mar 12 16:19:14 crc kubenswrapper[4687]: I0312 16:19:14.871896 4687 scope.go:117] "RemoveContainer" containerID="8e641712db1a1962856f1bc8834ef5662c58059d18ba81c47f6d78c93a0e3f14" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.188127 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-546598f745-bbcrq"] Mar 12 16:19:21 crc kubenswrapper[4687]: E0312 16:19:21.189002 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerName="util" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.189018 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerName="util" Mar 12 16:19:21 crc kubenswrapper[4687]: E0312 16:19:21.189041 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerName="extract" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.189049 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerName="extract" Mar 12 16:19:21 crc kubenswrapper[4687]: E0312 16:19:21.189070 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerName="pull" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.189079 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerName="pull" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.189243 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8d85c9-7b46-4030-8930-662b9b0012a3" containerName="extract" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.189957 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.192574 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.192626 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.192876 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.193047 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-jvnmp" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.195121 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.206300 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-546598f745-bbcrq"] Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.268217 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gn2\" (UniqueName: \"kubernetes.io/projected/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-kube-api-access-h4gn2\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.268335 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-apiservice-cert\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.268375 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-webhook-cert\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.369840 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-apiservice-cert\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.369889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-webhook-cert\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.369972 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gn2\" (UniqueName: \"kubernetes.io/projected/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-kube-api-access-h4gn2\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.376033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-webhook-cert\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.388560 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-apiservice-cert\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.393882 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gn2\" (UniqueName: \"kubernetes.io/projected/0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a-kube-api-access-h4gn2\") pod \"metallb-operator-controller-manager-546598f745-bbcrq\" (UID: \"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a\") " pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.429566 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw"] Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.430673 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.433702 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.438562 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.439064 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lxxmx" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.448657 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw"] Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.510838 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.573728 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-apiservice-cert\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.573964 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7lz\" (UniqueName: \"kubernetes.io/projected/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-kube-api-access-fm7lz\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.574117 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-webhook-cert\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.679252 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7lz\" (UniqueName: \"kubernetes.io/projected/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-kube-api-access-fm7lz\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.679447 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-webhook-cert\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.679581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-apiservice-cert\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.703807 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-apiservice-cert\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.710249 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7lz\" (UniqueName: \"kubernetes.io/projected/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-kube-api-access-fm7lz\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " 
pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.717009 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69e1152d-3280-4ab7-81dd-dc83f0daa3dc-webhook-cert\") pod \"metallb-operator-webhook-server-b584c959d-dtlbw\" (UID: \"69e1152d-3280-4ab7-81dd-dc83f0daa3dc\") " pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.747816 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.773208 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-546598f745-bbcrq"] Mar 12 16:19:21 crc kubenswrapper[4687]: I0312 16:19:21.957591 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" event={"ID":"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a","Type":"ContainerStarted","Data":"b6ef06c8787d63f09f360ceb7ca0868bf7981042a1df0dac90d4550fd1c2370a"} Mar 12 16:19:22 crc kubenswrapper[4687]: W0312 16:19:22.189921 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e1152d_3280_4ab7_81dd_dc83f0daa3dc.slice/crio-45f758db425f37edd1f71b673190df636a614c8acbabbc090502e7a886d4daab WatchSource:0}: Error finding container 45f758db425f37edd1f71b673190df636a614c8acbabbc090502e7a886d4daab: Status 404 returned error can't find the container with id 45f758db425f37edd1f71b673190df636a614c8acbabbc090502e7a886d4daab Mar 12 16:19:22 crc kubenswrapper[4687]: I0312 16:19:22.191744 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw"] Mar 12 16:19:22 crc kubenswrapper[4687]: I0312 16:19:22.964410 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" event={"ID":"69e1152d-3280-4ab7-81dd-dc83f0daa3dc","Type":"ContainerStarted","Data":"45f758db425f37edd1f71b673190df636a614c8acbabbc090502e7a886d4daab"} Mar 12 16:19:25 crc kubenswrapper[4687]: I0312 16:19:25.986173 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" event={"ID":"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a","Type":"ContainerStarted","Data":"8757a248e2ac7124948888fc8a6f0c041c843a8a4282aaba501b33958ba5d424"} Mar 12 16:19:25 crc kubenswrapper[4687]: I0312 16:19:25.986766 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:19:26 crc kubenswrapper[4687]: I0312 16:19:26.013024 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" podStartSLOduration=1.4774815270000001 podStartE2EDuration="5.013002231s" podCreationTimestamp="2026-03-12 16:19:21 +0000 UTC" firstStartedPulling="2026-03-12 16:19:21.795159013 +0000 UTC m=+1010.759121357" lastFinishedPulling="2026-03-12 16:19:25.330679717 +0000 UTC m=+1014.294642061" observedRunningTime="2026-03-12 16:19:26.003735492 +0000 UTC m=+1014.967697836" watchObservedRunningTime="2026-03-12 16:19:26.013002231 +0000 UTC m=+1014.976964595" Mar 12 16:19:28 crc kubenswrapper[4687]: 
I0312 16:19:28.004307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" event={"ID":"69e1152d-3280-4ab7-81dd-dc83f0daa3dc","Type":"ContainerStarted","Data":"93ec49e0b993f5e4105fdb7cf5b2578b54c63020aedf051f1ad7f6a94717e934"} Mar 12 16:19:28 crc kubenswrapper[4687]: I0312 16:19:28.004726 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:19:28 crc kubenswrapper[4687]: I0312 16:19:28.026049 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podStartSLOduration=1.7965559660000001 podStartE2EDuration="7.026033583s" podCreationTimestamp="2026-03-12 16:19:21 +0000 UTC" firstStartedPulling="2026-03-12 16:19:22.192977371 +0000 UTC m=+1011.156939715" lastFinishedPulling="2026-03-12 16:19:27.422454988 +0000 UTC m=+1016.386417332" observedRunningTime="2026-03-12 16:19:28.024148212 +0000 UTC m=+1016.988110556" watchObservedRunningTime="2026-03-12 16:19:28.026033583 +0000 UTC m=+1016.989995927" Mar 12 16:19:41 crc kubenswrapper[4687]: I0312 16:19:41.753260 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.160190 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555540-q5b7v"] Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.161937 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.163735 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.164628 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.165960 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555540-q5b7v"] Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.168096 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.321313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gq4x\" (UniqueName: \"kubernetes.io/projected/61faf509-0e75-40f1-b755-c10c266408a9-kube-api-access-8gq4x\") pod \"auto-csr-approver-29555540-q5b7v\" (UID: \"61faf509-0e75-40f1-b755-c10c266408a9\") " pod="openshift-infra/auto-csr-approver-29555540-q5b7v" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.423700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gq4x\" (UniqueName: \"kubernetes.io/projected/61faf509-0e75-40f1-b755-c10c266408a9-kube-api-access-8gq4x\") pod \"auto-csr-approver-29555540-q5b7v\" (UID: \"61faf509-0e75-40f1-b755-c10c266408a9\") " pod="openshift-infra/auto-csr-approver-29555540-q5b7v" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.456975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gq4x\" (UniqueName: \"kubernetes.io/projected/61faf509-0e75-40f1-b755-c10c266408a9-kube-api-access-8gq4x\") pod 
\"auto-csr-approver-29555540-q5b7v\" (UID: \"61faf509-0e75-40f1-b755-c10c266408a9\") " pod="openshift-infra/auto-csr-approver-29555540-q5b7v" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.485739 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.904708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555540-q5b7v"] Mar 12 16:20:00 crc kubenswrapper[4687]: I0312 16:20:00.915769 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:20:01 crc kubenswrapper[4687]: I0312 16:20:01.299236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" event={"ID":"61faf509-0e75-40f1-b755-c10c266408a9","Type":"ContainerStarted","Data":"a307ca32e163a5d60dd39c8cb7b9d3ead90ecba0de0eb79f9af9c2d6633d0d83"} Mar 12 16:20:01 crc kubenswrapper[4687]: I0312 16:20:01.515121 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.185705 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f"] Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.186898 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.193741 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mwrb7" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.202510 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.204786 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4xd8n"] Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.223434 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.230041 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.230069 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.255535 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f"] Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.257744 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72f79\" (UniqueName: \"kubernetes.io/projected/89dd944c-557b-4060-914f-c5287ed954bb-kube-api-access-72f79\") pod \"frr-k8s-webhook-server-bcc4b6f68-qgv9f\" (UID: \"89dd944c-557b-4060-914f-c5287ed954bb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.257787 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89dd944c-557b-4060-914f-c5287ed954bb-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qgv9f\" (UID: \"89dd944c-557b-4060-914f-c5287ed954bb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.294544 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jrgss"] Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.298685 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.301468 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.301648 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lnh26" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.301678 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.301748 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.328350 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-hq4lb"] Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.329531 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.334285 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.348109 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-hq4lb"] Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358642 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1003c23c-a0cb-4878-8399-d7b435084227-frr-startup\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358699 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-frr-conf\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358737 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwhq\" (UniqueName: \"kubernetes.io/projected/1003c23c-a0cb-4878-8399-d7b435084227-kube-api-access-rhwhq\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358763 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-frr-sockets\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358785 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72f79\" (UniqueName: \"kubernetes.io/projected/89dd944c-557b-4060-914f-c5287ed954bb-kube-api-access-72f79\") pod \"frr-k8s-webhook-server-bcc4b6f68-qgv9f\" (UID: \"89dd944c-557b-4060-914f-c5287ed954bb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358811 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89dd944c-557b-4060-914f-c5287ed954bb-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qgv9f\" (UID: \"89dd944c-557b-4060-914f-c5287ed954bb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358846 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1003c23c-a0cb-4878-8399-d7b435084227-metrics-certs\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.358866 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-metrics\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 
16:20:02.358897 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-reloader\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: E0312 16:20:02.359230 4687 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 12 16:20:02 crc kubenswrapper[4687]: E0312 16:20:02.359277 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89dd944c-557b-4060-914f-c5287ed954bb-cert podName:89dd944c-557b-4060-914f-c5287ed954bb nodeName:}" failed. No retries permitted until 2026-03-12 16:20:02.859261933 +0000 UTC m=+1051.823224277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/89dd944c-557b-4060-914f-c5287ed954bb-cert") pod "frr-k8s-webhook-server-bcc4b6f68-qgv9f" (UID: "89dd944c-557b-4060-914f-c5287ed954bb") : secret "frr-k8s-webhook-server-cert" not found Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.398397 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72f79\" (UniqueName: \"kubernetes.io/projected/89dd944c-557b-4060-914f-c5287ed954bb-kube-api-access-72f79\") pod \"frr-k8s-webhook-server-bcc4b6f68-qgv9f\" (UID: \"89dd944c-557b-4060-914f-c5287ed954bb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.460778 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metallb-excludel2\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.460853 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-frr-conf\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.460896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhwhq\" (UniqueName: \"kubernetes.io/projected/1003c23c-a0cb-4878-8399-d7b435084227-kube-api-access-rhwhq\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.460912 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-frr-sockets\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.460952 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80778679-d1d9-4307-990d-7e79bf7ce3f3-cert\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.460988 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1003c23c-a0cb-4878-8399-d7b435084227-metrics-certs\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-metrics\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80778679-d1d9-4307-990d-7e79bf7ce3f3-metrics-certs\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-reloader\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461068 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metrics-certs\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461091 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvr7\" (UniqueName: \"kubernetes.io/projected/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-kube-api-access-njvr7\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461120 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1003c23c-a0cb-4878-8399-d7b435084227-frr-startup\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461139 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461161 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlvw\" (UniqueName: \"kubernetes.io/projected/80778679-d1d9-4307-990d-7e79bf7ce3f3-kube-api-access-7hlvw\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.461558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-frr-conf\") pod 
\"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.462006 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-frr-sockets\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.462617 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-metrics\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.462805 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1003c23c-a0cb-4878-8399-d7b435084227-reloader\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.463485 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1003c23c-a0cb-4878-8399-d7b435084227-frr-startup\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.465116 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1003c23c-a0cb-4878-8399-d7b435084227-metrics-certs\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.486107 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhwhq\" (UniqueName: \"kubernetes.io/projected/1003c23c-a0cb-4878-8399-d7b435084227-kube-api-access-rhwhq\") pod \"frr-k8s-4xd8n\" (UID: \"1003c23c-a0cb-4878-8399-d7b435084227\") " pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.556184 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.562005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.562048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hlvw\" (UniqueName: \"kubernetes.io/projected/80778679-d1d9-4307-990d-7e79bf7ce3f3-kube-api-access-7hlvw\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.562085 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metallb-excludel2\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.562173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80778679-d1d9-4307-990d-7e79bf7ce3f3-cert\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.562237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80778679-d1d9-4307-990d-7e79bf7ce3f3-metrics-certs\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.562286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metrics-certs\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.562317 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njvr7\" (UniqueName: \"kubernetes.io/projected/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-kube-api-access-njvr7\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: E0312 16:20:02.562765 4687 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 16:20:02 crc kubenswrapper[4687]: E0312 16:20:02.562820 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist podName:00c4362c-6a07-47c7-a60a-bbaf5b9f0260 nodeName:}" failed. No retries permitted until 2026-03-12 16:20:03.062803996 +0000 UTC m=+1052.026766340 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist") pod "speaker-jrgss" (UID: "00c4362c-6a07-47c7-a60a-bbaf5b9f0260") : secret "metallb-memberlist" not found Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.564062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metallb-excludel2\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: E0312 16:20:02.564692 4687 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 12 16:20:02 crc kubenswrapper[4687]: E0312 16:20:02.564735 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metrics-certs podName:00c4362c-6a07-47c7-a60a-bbaf5b9f0260 nodeName:}" failed. No retries permitted until 2026-03-12 16:20:03.064723639 +0000 UTC m=+1052.028685983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metrics-certs") pod "speaker-jrgss" (UID: "00c4362c-6a07-47c7-a60a-bbaf5b9f0260") : secret "speaker-certs-secret" not found Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.566030 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.582046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hlvw\" (UniqueName: \"kubernetes.io/projected/80778679-d1d9-4307-990d-7e79bf7ce3f3-kube-api-access-7hlvw\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.584337 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80778679-d1d9-4307-990d-7e79bf7ce3f3-metrics-certs\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.584423 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njvr7\" (UniqueName: \"kubernetes.io/projected/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-kube-api-access-njvr7\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.586962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80778679-d1d9-4307-990d-7e79bf7ce3f3-cert\") pod \"controller-7bb4cc7c98-hq4lb\" (UID: \"80778679-d1d9-4307-990d-7e79bf7ce3f3\") " pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.647561 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.874397 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89dd944c-557b-4060-914f-c5287ed954bb-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qgv9f\" (UID: \"89dd944c-557b-4060-914f-c5287ed954bb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:02 crc kubenswrapper[4687]: I0312 16:20:02.883382 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89dd944c-557b-4060-914f-c5287ed954bb-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qgv9f\" (UID: \"89dd944c-557b-4060-914f-c5287ed954bb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.078611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metrics-certs\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.078682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:03 crc kubenswrapper[4687]: E0312 16:20:03.078889 4687 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 16:20:03 crc kubenswrapper[4687]: E0312 16:20:03.078969 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist podName:00c4362c-6a07-47c7-a60a-bbaf5b9f0260 nodeName:}" failed. No retries permitted until 2026-03-12 16:20:04.078943509 +0000 UTC m=+1053.042905863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist") pod "speaker-jrgss" (UID: "00c4362c-6a07-47c7-a60a-bbaf5b9f0260") : secret "metallb-memberlist" not found Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.083893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-metrics-certs\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.117389 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-hq4lb"] Mar 12 16:20:03 crc kubenswrapper[4687]: W0312 16:20:03.124601 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80778679_d1d9_4307_990d_7e79bf7ce3f3.slice/crio-95b4d582c88fdb037026c8ac53ea7e83e82cf591acaacccaa25ef05c35b643d9 WatchSource:0}: Error finding container 95b4d582c88fdb037026c8ac53ea7e83e82cf591acaacccaa25ef05c35b643d9: Status 404 returned error can't find the container with id 95b4d582c88fdb037026c8ac53ea7e83e82cf591acaacccaa25ef05c35b643d9 Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.131174 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.323367 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hq4lb" event={"ID":"80778679-d1d9-4307-990d-7e79bf7ce3f3","Type":"ContainerStarted","Data":"95b4d582c88fdb037026c8ac53ea7e83e82cf591acaacccaa25ef05c35b643d9"} Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.329677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" event={"ID":"61faf509-0e75-40f1-b755-c10c266408a9","Type":"ContainerStarted","Data":"50b0d7479586b3b72e803d123e9197a37f1a4b1b0f4222e56c471a06923b4a65"} Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.337263 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"a85a6da0c475487274d401ae2858c3c17d83299dc92dea63b20cbdd036d811a3"} Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.347762 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" podStartSLOduration=1.6286125139999998 podStartE2EDuration="3.347741832s" podCreationTimestamp="2026-03-12 16:20:00 +0000 UTC" firstStartedPulling="2026-03-12 16:20:00.915457698 +0000 UTC m=+1049.879420042" lastFinishedPulling="2026-03-12 16:20:02.634587016 +0000 UTC m=+1051.598549360" observedRunningTime="2026-03-12 16:20:03.341485729 +0000 UTC m=+1052.305448073" watchObservedRunningTime="2026-03-12 16:20:03.347741832 +0000 UTC m=+1052.311704176" Mar 12 16:20:03 crc kubenswrapper[4687]: W0312 16:20:03.774162 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89dd944c_557b_4060_914f_c5287ed954bb.slice/crio-bf7a99be14b0a01232443b4fac849c4ce34758748623fbc688fb5cfe56d7478f WatchSource:0}: Error finding container bf7a99be14b0a01232443b4fac849c4ce34758748623fbc688fb5cfe56d7478f: Status 404 returned error can't find the container with id bf7a99be14b0a01232443b4fac849c4ce34758748623fbc688fb5cfe56d7478f Mar 12 16:20:03 crc kubenswrapper[4687]: I0312 16:20:03.776933 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f"] Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.164867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.176972 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/00c4362c-6a07-47c7-a60a-bbaf5b9f0260-memberlist\") pod \"speaker-jrgss\" (UID: \"00c4362c-6a07-47c7-a60a-bbaf5b9f0260\") " pod="metallb-system/speaker-jrgss" Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.344888 4687 generic.go:334] "Generic (PLEG): container finished" podID="61faf509-0e75-40f1-b755-c10c266408a9" containerID="50b0d7479586b3b72e803d123e9197a37f1a4b1b0f4222e56c471a06923b4a65" exitCode=0 Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.344959 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" 
event={"ID":"61faf509-0e75-40f1-b755-c10c266408a9","Type":"ContainerDied","Data":"50b0d7479586b3b72e803d123e9197a37f1a4b1b0f4222e56c471a06923b4a65"} Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.346787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" event={"ID":"89dd944c-557b-4060-914f-c5287ed954bb","Type":"ContainerStarted","Data":"bf7a99be14b0a01232443b4fac849c4ce34758748623fbc688fb5cfe56d7478f"} Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.348420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hq4lb" event={"ID":"80778679-d1d9-4307-990d-7e79bf7ce3f3","Type":"ContainerStarted","Data":"94e22e8b4a6dfe8ae670ea2edae4526a8dea9555e41013aa4614fe0d5cdf5407"} Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.348457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hq4lb" event={"ID":"80778679-d1d9-4307-990d-7e79bf7ce3f3","Type":"ContainerStarted","Data":"3f496599538811d3982293ecbd6894138a2b44720f612fd4684a48035a4468fc"} Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.348580 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.410169 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podStartSLOduration=2.410144329 podStartE2EDuration="2.410144329s" podCreationTimestamp="2026-03-12 16:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:20:04.397907052 +0000 UTC m=+1053.361869416" watchObservedRunningTime="2026-03-12 16:20:04.410144329 +0000 UTC m=+1053.374106683" Mar 12 16:20:04 crc kubenswrapper[4687]: I0312 16:20:04.428733 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jrgss" Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.357466 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jrgss" event={"ID":"00c4362c-6a07-47c7-a60a-bbaf5b9f0260","Type":"ContainerStarted","Data":"3dcdd0ab79d4a5e5d872ea4643659d0c5506d57d2552579211811580df902f38"} Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.357814 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jrgss" event={"ID":"00c4362c-6a07-47c7-a60a-bbaf5b9f0260","Type":"ContainerStarted","Data":"ac5076fe2d3cb45cf61cdad57a50156afbd41e29f4164ee794084315a8bc3f4e"} Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.357835 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jrgss" event={"ID":"00c4362c-6a07-47c7-a60a-bbaf5b9f0260","Type":"ContainerStarted","Data":"4128edc3f4590762cb02af765d3228ddf5251b16ffb71fd18c61988584fba10a"} Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.358034 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jrgss" Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.374144 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jrgss" podStartSLOduration=3.374127482 podStartE2EDuration="3.374127482s" podCreationTimestamp="2026-03-12 16:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:20:05.371889541 +0000 UTC m=+1054.335851885" watchObservedRunningTime="2026-03-12 16:20:05.374127482 +0000 UTC m=+1054.338089826" Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.809275 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.894268 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gq4x\" (UniqueName: \"kubernetes.io/projected/61faf509-0e75-40f1-b755-c10c266408a9-kube-api-access-8gq4x\") pod \"61faf509-0e75-40f1-b755-c10c266408a9\" (UID: \"61faf509-0e75-40f1-b755-c10c266408a9\") " Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.907840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61faf509-0e75-40f1-b755-c10c266408a9-kube-api-access-8gq4x" (OuterVolumeSpecName: "kube-api-access-8gq4x") pod "61faf509-0e75-40f1-b755-c10c266408a9" (UID: "61faf509-0e75-40f1-b755-c10c266408a9"). InnerVolumeSpecName "kube-api-access-8gq4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:20:05 crc kubenswrapper[4687]: I0312 16:20:05.997047 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gq4x\" (UniqueName: \"kubernetes.io/projected/61faf509-0e75-40f1-b755-c10c266408a9-kube-api-access-8gq4x\") on node \"crc\" DevicePath \"\"" Mar 12 16:20:06 crc kubenswrapper[4687]: I0312 16:20:06.370457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" event={"ID":"61faf509-0e75-40f1-b755-c10c266408a9","Type":"ContainerDied","Data":"a307ca32e163a5d60dd39c8cb7b9d3ead90ecba0de0eb79f9af9c2d6633d0d83"} Mar 12 16:20:06 crc kubenswrapper[4687]: I0312 16:20:06.370495 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555540-q5b7v" Mar 12 16:20:06 crc kubenswrapper[4687]: I0312 16:20:06.370512 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a307ca32e163a5d60dd39c8cb7b9d3ead90ecba0de0eb79f9af9c2d6633d0d83" Mar 12 16:20:06 crc kubenswrapper[4687]: I0312 16:20:06.444694 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555534-vs8t2"] Mar 12 16:20:06 crc kubenswrapper[4687]: I0312 16:20:06.451203 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555534-vs8t2"] Mar 12 16:20:07 crc kubenswrapper[4687]: I0312 16:20:07.742158 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7909e1ad-45be-45cc-83d9-8efd939918c8" path="/var/lib/kubelet/pods/7909e1ad-45be-45cc-83d9-8efd939918c8/volumes" Mar 12 16:20:13 crc kubenswrapper[4687]: I0312 16:20:13.441389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" event={"ID":"89dd944c-557b-4060-914f-c5287ed954bb","Type":"ContainerStarted","Data":"3ffdfe61a826bdcdc04ddffe11be4b0e7037d5e5ce7e362a523db017751f50f9"} Mar 12 16:20:13 crc kubenswrapper[4687]: I0312 16:20:13.441939 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:13 crc kubenswrapper[4687]: I0312 16:20:13.443604 4687 generic.go:334] "Generic (PLEG): container finished" podID="1003c23c-a0cb-4878-8399-d7b435084227" containerID="c3143e70bbec2d4a6db92c497ddc71ff5419e4b1cd0262fe62c296b9a9e54702" exitCode=0 Mar 12 16:20:13 crc kubenswrapper[4687]: I0312 16:20:13.443647 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerDied","Data":"c3143e70bbec2d4a6db92c497ddc71ff5419e4b1cd0262fe62c296b9a9e54702"} Mar 12 16:20:13 crc kubenswrapper[4687]: I0312 16:20:13.465070 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" podStartSLOduration=3.19526876 podStartE2EDuration="11.465036452s" podCreationTimestamp="2026-03-12 16:20:02 +0000 UTC" firstStartedPulling="2026-03-12 16:20:03.77631499 +0000 UTC m=+1052.740277334" lastFinishedPulling="2026-03-12 16:20:12.046082682 +0000 UTC m=+1061.010045026" observedRunningTime="2026-03-12 16:20:13.460206288 +0000 UTC m=+1062.424168632" watchObservedRunningTime="2026-03-12 16:20:13.465036452 +0000 UTC m=+1062.428998796" Mar 12 16:20:14 crc kubenswrapper[4687]: I0312 16:20:14.433110 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jrgss" Mar 12 16:20:14 crc kubenswrapper[4687]: I0312 16:20:14.452052 4687 generic.go:334] "Generic (PLEG): container finished" podID="1003c23c-a0cb-4878-8399-d7b435084227" containerID="d16c193750345a31760bd63d967f4a379e7a20b55bccbb84313912c9bffa2ee6" exitCode=0 Mar 12 16:20:14 crc kubenswrapper[4687]: I0312 16:20:14.452196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerDied","Data":"d16c193750345a31760bd63d967f4a379e7a20b55bccbb84313912c9bffa2ee6"} Mar 12 16:20:15 crc kubenswrapper[4687]: I0312 16:20:15.463412 4687 generic.go:334] "Generic (PLEG): container finished" podID="1003c23c-a0cb-4878-8399-d7b435084227" 
containerID="b0c3488005b2781f4e5396255b5449a10abd84c89d759992a784b9113117fa43" exitCode=0 Mar 12 16:20:15 crc kubenswrapper[4687]: I0312 16:20:15.463528 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerDied","Data":"b0c3488005b2781f4e5396255b5449a10abd84c89d759992a784b9113117fa43"} Mar 12 16:20:16 crc kubenswrapper[4687]: I0312 16:20:16.474906 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"aa6c5bd39666443716b7a5791363ab05ea37c53a3c1f3b698fde879ba10f811c"} Mar 12 16:20:16 crc kubenswrapper[4687]: I0312 16:20:16.475563 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"7ca89da48a9e57ec6096877a6946787424d9709254ce22a422a59e1336a46840"} Mar 12 16:20:16 crc kubenswrapper[4687]: I0312 16:20:16.475583 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"e93a59ed48ea9154ba39e530777064e56a57130443fd0dd3bba6345ee46a232b"} Mar 12 16:20:16 crc kubenswrapper[4687]: I0312 16:20:16.475597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"cea997feac1bbb1cbf2f45f0fee666bc0a3a3c04a749e0d26bb1262825aabd12"} Mar 12 16:20:16 crc kubenswrapper[4687]: I0312 16:20:16.475608 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"5d2e02721609584bc7882266cb5d8d1b70ab8951a712b80dde05ac9f08cf07ea"} Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.166753 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d75xg"] Mar 12 16:20:17 crc kubenswrapper[4687]: E0312 16:20:17.167055 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61faf509-0e75-40f1-b755-c10c266408a9" containerName="oc" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.167067 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="61faf509-0e75-40f1-b755-c10c266408a9" containerName="oc" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.167265 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="61faf509-0e75-40f1-b755-c10c266408a9" containerName="oc" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.167825 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d75xg" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.171094 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.171297 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hpt6z" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.171310 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.198293 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d75xg"] Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.301472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sp5q\" (UniqueName: \"kubernetes.io/projected/c0b89ad7-cb73-4116-b314-02899f4febf9-kube-api-access-6sp5q\") pod \"openstack-operator-index-d75xg\" (UID: \"c0b89ad7-cb73-4116-b314-02899f4febf9\") " pod="openstack-operators/openstack-operator-index-d75xg" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.402874 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sp5q\" (UniqueName: \"kubernetes.io/projected/c0b89ad7-cb73-4116-b314-02899f4febf9-kube-api-access-6sp5q\") pod \"openstack-operator-index-d75xg\" (UID: \"c0b89ad7-cb73-4116-b314-02899f4febf9\") " pod="openstack-operators/openstack-operator-index-d75xg" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.422146 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sp5q\" (UniqueName: \"kubernetes.io/projected/c0b89ad7-cb73-4116-b314-02899f4febf9-kube-api-access-6sp5q\") pod \"openstack-operator-index-d75xg\" (UID: \"c0b89ad7-cb73-4116-b314-02899f4febf9\") " pod="openstack-operators/openstack-operator-index-d75xg" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.485756 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d75xg" Mar 12 16:20:17 crc kubenswrapper[4687]: I0312 16:20:17.891798 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d75xg"] Mar 12 16:20:17 crc kubenswrapper[4687]: W0312 16:20:17.894068 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0b89ad7_cb73_4116_b314_02899f4febf9.slice/crio-f4adc2089fa849dadb63ae901d75c0afcbce1ef3c2b2b967b412a8b84fc104b7 WatchSource:0}: Error finding container f4adc2089fa849dadb63ae901d75c0afcbce1ef3c2b2b967b412a8b84fc104b7: Status 404 returned error can't find the container with id f4adc2089fa849dadb63ae901d75c0afcbce1ef3c2b2b967b412a8b84fc104b7 Mar 12 16:20:18 crc kubenswrapper[4687]: I0312 16:20:18.498036 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"dab1c9de9e3aaf4ae370de803e8b1b54c0ffec697f5fb50d9d898f45ce34e0ef"} Mar 12 16:20:18 crc kubenswrapper[4687]: I0312 16:20:18.498659 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:18 crc kubenswrapper[4687]: I0312 16:20:18.499812 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d75xg" event={"ID":"c0b89ad7-cb73-4116-b314-02899f4febf9","Type":"ContainerStarted","Data":"f4adc2089fa849dadb63ae901d75c0afcbce1ef3c2b2b967b412a8b84fc104b7"} Mar 12 16:20:18 crc kubenswrapper[4687]: I0312 16:20:18.535020 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4xd8n" podStartSLOduration=7.396695481 podStartE2EDuration="16.534988744s" podCreationTimestamp="2026-03-12 16:20:02 +0000 UTC" firstStartedPulling="2026-03-12 16:20:02.883666354 +0000 UTC m=+1051.847628698" lastFinishedPulling="2026-03-12 16:20:12.021959597 +0000 UTC m=+1060.985921961" observedRunningTime="2026-03-12 16:20:18.522407547 +0000 UTC m=+1067.486369931" watchObservedRunningTime="2026-03-12 16:20:18.534988744 +0000 UTC m=+1067.498951128" Mar 12 16:20:20 crc kubenswrapper[4687]: I0312 16:20:20.519411 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d75xg" event={"ID":"c0b89ad7-cb73-4116-b314-02899f4febf9","Type":"ContainerStarted","Data":"ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2"} Mar 12 16:20:20 crc kubenswrapper[4687]: I0312 16:20:20.534966 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d75xg"] Mar 12 16:20:20 crc kubenswrapper[4687]: I0312 16:20:20.556598 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d75xg" podStartSLOduration=1.2212628620000001 podStartE2EDuration="3.556576722s" podCreationTimestamp="2026-03-12 16:20:17 +0000 UTC" firstStartedPulling="2026-03-12 16:20:17.896849606 +0000 UTC m=+1066.860811970" lastFinishedPulling="2026-03-12 16:20:20.232163476 +0000 UTC m=+1069.196125830" observedRunningTime="2026-03-12 16:20:20.544089058 +0000 UTC m=+1069.508051442" watchObservedRunningTime="2026-03-12 16:20:20.556576722 +0000 UTC m=+1069.520539076" Mar 12 16:20:21 crc kubenswrapper[4687]: I0312 16:20:21.148562 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l6tqh"] Mar 12 16:20:21 crc 
kubenswrapper[4687]: I0312 16:20:21.150699 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:21 crc kubenswrapper[4687]: I0312 16:20:21.183968 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l6tqh"] Mar 12 16:20:21 crc kubenswrapper[4687]: I0312 16:20:21.281623 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcwrj\" (UniqueName: \"kubernetes.io/projected/dd381e8d-4f1d-48ef-b8a7-b10f6c97b334-kube-api-access-lcwrj\") pod \"openstack-operator-index-l6tqh\" (UID: \"dd381e8d-4f1d-48ef-b8a7-b10f6c97b334\") " pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:21 crc kubenswrapper[4687]: I0312 16:20:21.383434 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcwrj\" (UniqueName: \"kubernetes.io/projected/dd381e8d-4f1d-48ef-b8a7-b10f6c97b334-kube-api-access-lcwrj\") pod \"openstack-operator-index-l6tqh\" (UID: \"dd381e8d-4f1d-48ef-b8a7-b10f6c97b334\") " pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:21 crc kubenswrapper[4687]: I0312 16:20:21.417114 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcwrj\" (UniqueName: \"kubernetes.io/projected/dd381e8d-4f1d-48ef-b8a7-b10f6c97b334-kube-api-access-lcwrj\") pod \"openstack-operator-index-l6tqh\" (UID: \"dd381e8d-4f1d-48ef-b8a7-b10f6c97b334\") " pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:21 crc kubenswrapper[4687]: I0312 16:20:21.486939 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.003990 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l6tqh"] Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.539889 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l6tqh" event={"ID":"dd381e8d-4f1d-48ef-b8a7-b10f6c97b334","Type":"ContainerStarted","Data":"740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec"} Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.540166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l6tqh" event={"ID":"dd381e8d-4f1d-48ef-b8a7-b10f6c97b334","Type":"ContainerStarted","Data":"08c99611e37381b31da4f6484904dc058512c6efdbc072071ad1e89117cd8187"} Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.540016 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-d75xg" podUID="c0b89ad7-cb73-4116-b314-02899f4febf9" containerName="registry-server" containerID="cri-o://ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2" gracePeriod=2 Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.557506 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.570097 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-l6tqh" podStartSLOduration=1.507477952 podStartE2EDuration="1.570078788s" podCreationTimestamp="2026-03-12 16:20:21 +0000 UTC" firstStartedPulling="2026-03-12 16:20:22.025887811 
+0000 UTC m=+1070.989850205" lastFinishedPulling="2026-03-12 16:20:22.088488687 +0000 UTC m=+1071.052451041" observedRunningTime="2026-03-12 16:20:22.564621337 +0000 UTC m=+1071.528583721" watchObservedRunningTime="2026-03-12 16:20:22.570078788 +0000 UTC m=+1071.534041132" Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.616506 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.661511 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 16:20:22 crc kubenswrapper[4687]: I0312 16:20:22.990753 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d75xg" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.119609 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sp5q\" (UniqueName: \"kubernetes.io/projected/c0b89ad7-cb73-4116-b314-02899f4febf9-kube-api-access-6sp5q\") pod \"c0b89ad7-cb73-4116-b314-02899f4febf9\" (UID: \"c0b89ad7-cb73-4116-b314-02899f4febf9\") " Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.125830 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b89ad7-cb73-4116-b314-02899f4febf9-kube-api-access-6sp5q" (OuterVolumeSpecName: "kube-api-access-6sp5q") pod "c0b89ad7-cb73-4116-b314-02899f4febf9" (UID: "c0b89ad7-cb73-4116-b314-02899f4febf9"). InnerVolumeSpecName "kube-api-access-6sp5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.140738 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.220856 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sp5q\" (UniqueName: \"kubernetes.io/projected/c0b89ad7-cb73-4116-b314-02899f4febf9-kube-api-access-6sp5q\") on node \"crc\" DevicePath \"\"" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.549651 4687 generic.go:334] "Generic (PLEG): container finished" podID="c0b89ad7-cb73-4116-b314-02899f4febf9" containerID="ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2" exitCode=0 Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.549700 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d75xg" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.549752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d75xg" event={"ID":"c0b89ad7-cb73-4116-b314-02899f4febf9","Type":"ContainerDied","Data":"ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2"} Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.549848 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d75xg" event={"ID":"c0b89ad7-cb73-4116-b314-02899f4febf9","Type":"ContainerDied","Data":"f4adc2089fa849dadb63ae901d75c0afcbce1ef3c2b2b967b412a8b84fc104b7"} Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.549879 4687 scope.go:117] "RemoveContainer" containerID="ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.573684 4687 scope.go:117] "RemoveContainer" containerID="ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2" Mar 12 16:20:23 crc kubenswrapper[4687]: E0312 16:20:23.574418 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2\": container with ID starting with ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2 not found: ID does not exist" containerID="ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.574502 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2"} err="failed to get container status \"ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2\": rpc error: code = NotFound desc = could not find container \"ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2\": container with ID starting with ea8f2a53c98fa5c7bfac596de054e6a955cb5f99348d21cf9adca152d045ffb2 not found: ID does not exist" Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.595933 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d75xg"] Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.603083 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-d75xg"] Mar 12 16:20:23 crc kubenswrapper[4687]: I0312 16:20:23.744907 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b89ad7-cb73-4116-b314-02899f4febf9" path="/var/lib/kubelet/pods/c0b89ad7-cb73-4116-b314-02899f4febf9/volumes" Mar 12 16:20:31 crc kubenswrapper[4687]: I0312 16:20:31.487111 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:31 crc kubenswrapper[4687]: I0312 16:20:31.487897 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:31 crc kubenswrapper[4687]: I0312 16:20:31.525893 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:31 crc kubenswrapper[4687]: I0312 16:20:31.662161 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 16:20:32 crc 
kubenswrapper[4687]: I0312 16:20:32.562519 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4xd8n" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.825895 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh"] Mar 12 16:20:37 crc kubenswrapper[4687]: E0312 16:20:37.826733 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b89ad7-cb73-4116-b314-02899f4febf9" containerName="registry-server" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.826745 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b89ad7-cb73-4116-b314-02899f4febf9" containerName="registry-server" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.826928 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b89ad7-cb73-4116-b314-02899f4febf9" containerName="registry-server" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.828186 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.830877 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dqdnn" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.841497 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh"] Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.962340 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nl7z\" (UniqueName: \"kubernetes.io/projected/203d759d-0e28-4cd0-b0d3-a65f82035ef6-kube-api-access-4nl7z\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.962743 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-bundle\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:37 crc kubenswrapper[4687]: I0312 16:20:37.962978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-util\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.063912 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nl7z\" (UniqueName: \"kubernetes.io/projected/203d759d-0e28-4cd0-b0d3-a65f82035ef6-kube-api-access-4nl7z\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 
16:20:38.064027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-bundle\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.064067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-util\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.064604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-bundle\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.064635 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-util\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.084428 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nl7z\" (UniqueName: \"kubernetes.io/projected/203d759d-0e28-4cd0-b0d3-a65f82035ef6-kube-api-access-4nl7z\") pod \"b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.155432 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.578580 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh"] Mar 12 16:20:38 crc kubenswrapper[4687]: I0312 16:20:38.677276 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" event={"ID":"203d759d-0e28-4cd0-b0d3-a65f82035ef6","Type":"ContainerStarted","Data":"825b21191cc3b983667d2a305e7cede591d9db27de2a583b60d9f929d2bd7f89"} Mar 12 16:20:39 crc kubenswrapper[4687]: I0312 16:20:39.684241 4687 generic.go:334] "Generic (PLEG): container finished" podID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerID="afe4a9dbbc10ae8d37855de67c3534a244a0aa250abca4794a913634d04da400" exitCode=0 Mar 12 16:20:39 crc kubenswrapper[4687]: I0312 16:20:39.684299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" event={"ID":"203d759d-0e28-4cd0-b0d3-a65f82035ef6","Type":"ContainerDied","Data":"afe4a9dbbc10ae8d37855de67c3534a244a0aa250abca4794a913634d04da400"} Mar 12 16:20:40 crc kubenswrapper[4687]: I0312 16:20:40.698044 4687 generic.go:334] "Generic (PLEG): container finished" podID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerID="af4bf15a614387c2f1efddbdc92bef28081e6a9b5b35e1e7a3db817a228f6e46" exitCode=0 Mar 12 16:20:40 crc kubenswrapper[4687]: I0312 16:20:40.698101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" event={"ID":"203d759d-0e28-4cd0-b0d3-a65f82035ef6","Type":"ContainerDied","Data":"af4bf15a614387c2f1efddbdc92bef28081e6a9b5b35e1e7a3db817a228f6e46"} Mar 12 16:20:41 crc kubenswrapper[4687]: I0312 16:20:41.712271 4687 generic.go:334] "Generic (PLEG): container finished" podID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerID="e94135fe8f2570bd131a6ca4a6eeeb2a0bd0744815e007775e84dbe2e166ef7a" exitCode=0 Mar 12 16:20:41 crc kubenswrapper[4687]: I0312 16:20:41.712331 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" event={"ID":"203d759d-0e28-4cd0-b0d3-a65f82035ef6","Type":"ContainerDied","Data":"e94135fe8f2570bd131a6ca4a6eeeb2a0bd0744815e007775e84dbe2e166ef7a"} Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.009903 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.047125 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-bundle\") pod \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.047239 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nl7z\" (UniqueName: \"kubernetes.io/projected/203d759d-0e28-4cd0-b0d3-a65f82035ef6-kube-api-access-4nl7z\") pod \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.047291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-util\") pod \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\" (UID: \"203d759d-0e28-4cd0-b0d3-a65f82035ef6\") " Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.048293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-bundle" (OuterVolumeSpecName: "bundle") pod "203d759d-0e28-4cd0-b0d3-a65f82035ef6" (UID: "203d759d-0e28-4cd0-b0d3-a65f82035ef6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.054672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203d759d-0e28-4cd0-b0d3-a65f82035ef6-kube-api-access-4nl7z" (OuterVolumeSpecName: "kube-api-access-4nl7z") pod "203d759d-0e28-4cd0-b0d3-a65f82035ef6" (UID: "203d759d-0e28-4cd0-b0d3-a65f82035ef6"). InnerVolumeSpecName "kube-api-access-4nl7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.065811 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-util" (OuterVolumeSpecName: "util") pod "203d759d-0e28-4cd0-b0d3-a65f82035ef6" (UID: "203d759d-0e28-4cd0-b0d3-a65f82035ef6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.150166 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.150631 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/203d759d-0e28-4cd0-b0d3-a65f82035ef6-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.150654 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nl7z\" (UniqueName: \"kubernetes.io/projected/203d759d-0e28-4cd0-b0d3-a65f82035ef6-kube-api-access-4nl7z\") on node \"crc\" DevicePath \"\"" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.727515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" event={"ID":"203d759d-0e28-4cd0-b0d3-a65f82035ef6","Type":"ContainerDied","Data":"825b21191cc3b983667d2a305e7cede591d9db27de2a583b60d9f929d2bd7f89"} Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.727552 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="825b21191cc3b983667d2a305e7cede591d9db27de2a583b60d9f929d2bd7f89" Mar 12 16:20:43 crc kubenswrapper[4687]: I0312 16:20:43.727579 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.819043 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-568548c879-hsj4g"] Mar 12 16:20:50 crc kubenswrapper[4687]: E0312 16:20:50.819935 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerName="extract" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.819972 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerName="extract" Mar 12 16:20:50 crc kubenswrapper[4687]: E0312 16:20:50.820016 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerName="util" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.820025 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerName="util" Mar 12 16:20:50 crc kubenswrapper[4687]: E0312 16:20:50.820042 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerName="pull" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.820049 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerName="pull" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.820219 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="203d759d-0e28-4cd0-b0d3-a65f82035ef6" containerName="extract" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.820968 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.822608 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-ms57q" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.843624 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-568548c879-hsj4g"] Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.885751 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctsgw\" (UniqueName: \"kubernetes.io/projected/43fc4c76-a11e-4403-81b5-ee741b3c2a63-kube-api-access-ctsgw\") pod \"openstack-operator-controller-init-568548c879-hsj4g\" (UID: \"43fc4c76-a11e-4403-81b5-ee741b3c2a63\") " pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 16:20:50 crc kubenswrapper[4687]: I0312 16:20:50.986983 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsgw\" (UniqueName: \"kubernetes.io/projected/43fc4c76-a11e-4403-81b5-ee741b3c2a63-kube-api-access-ctsgw\") pod \"openstack-operator-controller-init-568548c879-hsj4g\" (UID: \"43fc4c76-a11e-4403-81b5-ee741b3c2a63\") " pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 16:20:51 crc kubenswrapper[4687]: I0312 16:20:51.040458 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsgw\" (UniqueName: \"kubernetes.io/projected/43fc4c76-a11e-4403-81b5-ee741b3c2a63-kube-api-access-ctsgw\") pod \"openstack-operator-controller-init-568548c879-hsj4g\" (UID: \"43fc4c76-a11e-4403-81b5-ee741b3c2a63\") " pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 16:20:51 crc kubenswrapper[4687]: I0312 16:20:51.140850 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 16:20:51 crc kubenswrapper[4687]: I0312 16:20:51.607150 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-568548c879-hsj4g"] Mar 12 16:20:51 crc kubenswrapper[4687]: W0312 16:20:51.615559 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fc4c76_a11e_4403_81b5_ee741b3c2a63.slice/crio-447b9836df8ff8d017f7f9100fac8a335268e6f72964dfb84468cd2f5b0f1be3 WatchSource:0}: Error finding container 447b9836df8ff8d017f7f9100fac8a335268e6f72964dfb84468cd2f5b0f1be3: Status 404 returned error can't find the container with id 447b9836df8ff8d017f7f9100fac8a335268e6f72964dfb84468cd2f5b0f1be3 Mar 12 16:20:51 crc kubenswrapper[4687]: I0312 16:20:51.792204 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" event={"ID":"43fc4c76-a11e-4403-81b5-ee741b3c2a63","Type":"ContainerStarted","Data":"447b9836df8ff8d017f7f9100fac8a335268e6f72964dfb84468cd2f5b0f1be3"} Mar 12 16:20:55 crc kubenswrapper[4687]: I0312 16:20:55.824890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" event={"ID":"43fc4c76-a11e-4403-81b5-ee741b3c2a63","Type":"ContainerStarted","Data":"354797209cfe02dc3f0011a03dec290eba9555e6c28177d4f4df6e4f0a582f8f"} Mar 12 16:20:55 crc kubenswrapper[4687]: I0312 16:20:55.825334 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 16:20:55 crc kubenswrapper[4687]: I0312 16:20:55.847449 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podStartSLOduration=1.866296369 podStartE2EDuration="5.847428456s" podCreationTimestamp="2026-03-12 16:20:50 +0000 UTC" firstStartedPulling="2026-03-12 16:20:51.618168227 +0000 UTC m=+1100.582130581" lastFinishedPulling="2026-03-12 16:20:55.599300324 +0000 UTC m=+1104.563262668" observedRunningTime="2026-03-12 16:20:55.845523853 +0000 UTC m=+1104.809486207" watchObservedRunningTime="2026-03-12 16:20:55.847428456 +0000 UTC m=+1104.811390810" Mar 12 16:21:00 crc kubenswrapper[4687]: I0312 16:21:00.283249 4687 scope.go:117] "RemoveContainer" containerID="4ba53f969ae490d513ef540bde2e74409bf5bbead56c4785d9982033eff2fe9b" Mar 12 16:21:01 crc kubenswrapper[4687]: I0312 16:21:01.143459 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 16:21:14 crc kubenswrapper[4687]: I0312 16:21:14.122146 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:21:14 crc kubenswrapper[4687]: I0312 16:21:14.122711 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 
16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.053250 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.054691 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.057294 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rpkdl" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.085066 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.086112 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.090803 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ssxn4" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.101835 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.103018 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.105003 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-z2jng" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.119215 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.126692 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.127843 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.130183 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-447p4" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.136289 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqgc\" (UniqueName: \"kubernetes.io/projected/26adb4e9-0197-4023-b876-afbb572f93d8-kube-api-access-tvqgc\") pod \"barbican-operator-controller-manager-677bd678f7-vqhcv\" (UID: \"26adb4e9-0197-4023-b876-afbb572f93d8\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.136537 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.137529 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.146337 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-pmdct" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.147563 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.172047 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.184330 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.204904 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.212408 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.219902 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.225113 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-28hdm" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.227587 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.229375 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.239417 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-r4pmb" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.239601 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.240952 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkbd2\" (UniqueName: \"kubernetes.io/projected/af6289c5-2a9a-4429-96d6-3c7bbff706e0-kube-api-access-lkbd2\") pod \"designate-operator-controller-manager-66d56f6ff4-9rsqr\" (UID: \"af6289c5-2a9a-4429-96d6-3c7bbff706e0\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.241002 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnjn\" (UniqueName: \"kubernetes.io/projected/63f537cc-6a26-4a05-9b17-80549297e9f2-kube-api-access-ppnjn\") pod \"glance-operator-controller-manager-5964f64c48-65vx5\" (UID: \"63f537cc-6a26-4a05-9b17-80549297e9f2\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.241070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnnv\" (UniqueName: \"kubernetes.io/projected/64a70e69-432d-4ddc-8eef-e16f4e374c56-kube-api-access-lfnnv\") pod \"heat-operator-controller-manager-77b6666d85-nq67j\" (UID: \"64a70e69-432d-4ddc-8eef-e16f4e374c56\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.241104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvqgc\" (UniqueName: \"kubernetes.io/projected/26adb4e9-0197-4023-b876-afbb572f93d8-kube-api-access-tvqgc\") pod \"barbican-operator-controller-manager-677bd678f7-vqhcv\" (UID: \"26adb4e9-0197-4023-b876-afbb572f93d8\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.241188 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5mhh\" (UniqueName: \"kubernetes.io/projected/0fc40919-64b1-4b8c-ab92-b9297cb5c352-kube-api-access-x5mhh\") pod \"cinder-operator-controller-manager-984cd4dcf-dx9rg\" (UID: \"0fc40919-64b1-4b8c-ab92-b9297cb5c352\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.260744 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.272979 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqgc\" (UniqueName: \"kubernetes.io/projected/26adb4e9-0197-4023-b876-afbb572f93d8-kube-api-access-tvqgc\") pod \"barbican-operator-controller-manager-677bd678f7-vqhcv\" (UID: \"26adb4e9-0197-4023-b876-afbb572f93d8\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 16:21:43 crc 
kubenswrapper[4687]: I0312 16:21:43.305933 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.307317 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.320520 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7zc42" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.333298 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.342946 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfnnv\" (UniqueName: \"kubernetes.io/projected/64a70e69-432d-4ddc-8eef-e16f4e374c56-kube-api-access-lfnnv\") pod \"heat-operator-controller-manager-77b6666d85-nq67j\" (UID: \"64a70e69-432d-4ddc-8eef-e16f4e374c56\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.342998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjqqb\" (UniqueName: \"kubernetes.io/projected/43d0733d-5a4f-4b51-a95e-eb2cf8593545-kube-api-access-sjqqb\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.343035 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.343089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2s82\" (UniqueName: \"kubernetes.io/projected/718e95dd-fb86-4403-8048-d68f1f23d3ca-kube-api-access-g2s82\") pod \"horizon-operator-controller-manager-6d9d6b584d-d8j97\" (UID: \"718e95dd-fb86-4403-8048-d68f1f23d3ca\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.343114 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5mhh\" (UniqueName: \"kubernetes.io/projected/0fc40919-64b1-4b8c-ab92-b9297cb5c352-kube-api-access-x5mhh\") pod \"cinder-operator-controller-manager-984cd4dcf-dx9rg\" (UID: \"0fc40919-64b1-4b8c-ab92-b9297cb5c352\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.343160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnjn\" (UniqueName: \"kubernetes.io/projected/63f537cc-6a26-4a05-9b17-80549297e9f2-kube-api-access-ppnjn\") pod \"glance-operator-controller-manager-5964f64c48-65vx5\" (UID: \"63f537cc-6a26-4a05-9b17-80549297e9f2\") " 
pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.343177 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkbd2\" (UniqueName: \"kubernetes.io/projected/af6289c5-2a9a-4429-96d6-3c7bbff706e0-kube-api-access-lkbd2\") pod \"designate-operator-controller-manager-66d56f6ff4-9rsqr\" (UID: \"af6289c5-2a9a-4429-96d6-3c7bbff706e0\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.349464 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.350514 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.365740 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kf88t" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.369920 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.380846 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.396133 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkbd2\" (UniqueName: \"kubernetes.io/projected/af6289c5-2a9a-4429-96d6-3c7bbff706e0-kube-api-access-lkbd2\") pod \"designate-operator-controller-manager-66d56f6ff4-9rsqr\" (UID: \"af6289c5-2a9a-4429-96d6-3c7bbff706e0\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.405999 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.406790 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnjn\" (UniqueName: \"kubernetes.io/projected/63f537cc-6a26-4a05-9b17-80549297e9f2-kube-api-access-ppnjn\") pod \"glance-operator-controller-manager-5964f64c48-65vx5\" (UID: \"63f537cc-6a26-4a05-9b17-80549297e9f2\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.407232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5mhh\" (UniqueName: \"kubernetes.io/projected/0fc40919-64b1-4b8c-ab92-b9297cb5c352-kube-api-access-x5mhh\") pod \"cinder-operator-controller-manager-984cd4dcf-dx9rg\" (UID: \"0fc40919-64b1-4b8c-ab92-b9297cb5c352\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.423782 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.438038 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfnnv\" (UniqueName: \"kubernetes.io/projected/64a70e69-432d-4ddc-8eef-e16f4e374c56-kube-api-access-lfnnv\") pod \"heat-operator-controller-manager-77b6666d85-nq67j\" (UID: \"64a70e69-432d-4ddc-8eef-e16f4e374c56\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.446302 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn58w\" (UniqueName: \"kubernetes.io/projected/79d0c51f-999a-4e39-b6b5-aecf10472a4c-kube-api-access-rn58w\") pod \"keystone-operator-controller-manager-684f77d66d-68xmx\" (UID: \"79d0c51f-999a-4e39-b6b5-aecf10472a4c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.446355 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjqqb\" (UniqueName: \"kubernetes.io/projected/43d0733d-5a4f-4b51-a95e-eb2cf8593545-kube-api-access-sjqqb\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.446415 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.446469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2s82\" (UniqueName: \"kubernetes.io/projected/718e95dd-fb86-4403-8048-d68f1f23d3ca-kube-api-access-g2s82\") pod \"horizon-operator-controller-manager-6d9d6b584d-d8j97\" (UID: \"718e95dd-fb86-4403-8048-d68f1f23d3ca\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.446528 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvfc\" (UniqueName: \"kubernetes.io/projected/7569aa35-67ce-43f4-8e4c-f851973745d9-kube-api-access-tmvfc\") pod \"ironic-operator-controller-manager-6bbb499bbc-8jrnk\" (UID: \"7569aa35-67ce-43f4-8e4c-f851973745d9\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 16:21:43 crc kubenswrapper[4687]: E0312 16:21:43.446668 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:43 crc kubenswrapper[4687]: E0312 16:21:43.446713 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert podName:43d0733d-5a4f-4b51-a95e-eb2cf8593545 nodeName:}" failed. No retries permitted until 2026-03-12 16:21:43.94669436 +0000 UTC m=+1152.910656694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert") pod "infra-operator-controller-manager-5995f4446f-7cchw" (UID: "43d0733d-5a4f-4b51-a95e-eb2cf8593545") : secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.449031 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.450003 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.451350 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.453402 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hmftm" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.464219 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.473473 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.488424 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.489571 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.494020 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fbfj7" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.504596 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjqqb\" (UniqueName: \"kubernetes.io/projected/43d0733d-5a4f-4b51-a95e-eb2cf8593545-kube-api-access-sjqqb\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.514536 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2s82\" (UniqueName: \"kubernetes.io/projected/718e95dd-fb86-4403-8048-d68f1f23d3ca-kube-api-access-g2s82\") pod \"horizon-operator-controller-manager-6d9d6b584d-d8j97\" (UID: \"718e95dd-fb86-4403-8048-d68f1f23d3ca\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.542444 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.543434 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.550440 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvfc\" (UniqueName: \"kubernetes.io/projected/7569aa35-67ce-43f4-8e4c-f851973745d9-kube-api-access-tmvfc\") pod \"ironic-operator-controller-manager-6bbb499bbc-8jrnk\" (UID: \"7569aa35-67ce-43f4-8e4c-f851973745d9\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.550493 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn58w\" (UniqueName: \"kubernetes.io/projected/79d0c51f-999a-4e39-b6b5-aecf10472a4c-kube-api-access-rn58w\") pod \"keystone-operator-controller-manager-684f77d66d-68xmx\" (UID: \"79d0c51f-999a-4e39-b6b5-aecf10472a4c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.550550 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfbmx\" (UniqueName: \"kubernetes.io/projected/39ab069c-1ccd-4ad4-b4ea-b71b1b09472f-kube-api-access-dfbmx\") pod \"mariadb-operator-controller-manager-658d4cdd5-wsq7q\" (UID: \"39ab069c-1ccd-4ad4-b4ea-b71b1b09472f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.550578 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7clfn\" (UniqueName: \"kubernetes.io/projected/1c9c8552-26b0-408f-bd09-40c74041cbfa-kube-api-access-7clfn\") pod \"manila-operator-controller-manager-68f45f9d9f-hmn45\" (UID: \"1c9c8552-26b0-408f-bd09-40c74041cbfa\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.551941 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dkfkn" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.555973 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.573162 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.583440 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.588416 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvfc\" (UniqueName: \"kubernetes.io/projected/7569aa35-67ce-43f4-8e4c-f851973745d9-kube-api-access-tmvfc\") pod \"ironic-operator-controller-manager-6bbb499bbc-8jrnk\" (UID: \"7569aa35-67ce-43f4-8e4c-f851973745d9\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.589296 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn58w\" (UniqueName: \"kubernetes.io/projected/79d0c51f-999a-4e39-b6b5-aecf10472a4c-kube-api-access-rn58w\") pod \"keystone-operator-controller-manager-684f77d66d-68xmx\" (UID: \"79d0c51f-999a-4e39-b6b5-aecf10472a4c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.598348 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.599311 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.604143 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.606439 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7cl2x" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.623327 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.624429 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.624524 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.628137 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-p9tgn" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.646473 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.647447 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.650605 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rgg27" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.652144 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfbmx\" (UniqueName: \"kubernetes.io/projected/39ab069c-1ccd-4ad4-b4ea-b71b1b09472f-kube-api-access-dfbmx\") pod \"mariadb-operator-controller-manager-658d4cdd5-wsq7q\" (UID: \"39ab069c-1ccd-4ad4-b4ea-b71b1b09472f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.652189 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7clfn\" (UniqueName: \"kubernetes.io/projected/1c9c8552-26b0-408f-bd09-40c74041cbfa-kube-api-access-7clfn\") pod \"manila-operator-controller-manager-68f45f9d9f-hmn45\" (UID: \"1c9c8552-26b0-408f-bd09-40c74041cbfa\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.652344 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhgm5\" (UniqueName: \"kubernetes.io/projected/15c585dd-9efa-430b-aeb5-42eaeace0d18-kube-api-access-qhgm5\") pod \"neutron-operator-controller-manager-776c5696bf-cds6d\" (UID: \"15c585dd-9efa-430b-aeb5-42eaeace0d18\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.655141 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.656351 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.667509 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.668802 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.670848 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.677202 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.677420 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dw2gj" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.685990 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfbmx\" (UniqueName: \"kubernetes.io/projected/39ab069c-1ccd-4ad4-b4ea-b71b1b09472f-kube-api-access-dfbmx\") pod \"mariadb-operator-controller-manager-658d4cdd5-wsq7q\" (UID: \"39ab069c-1ccd-4ad4-b4ea-b71b1b09472f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.687408 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.692782 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7clfn\" (UniqueName: \"kubernetes.io/projected/1c9c8552-26b0-408f-bd09-40c74041cbfa-kube-api-access-7clfn\") pod \"manila-operator-controller-manager-68f45f9d9f-hmn45\" (UID: \"1c9c8552-26b0-408f-bd09-40c74041cbfa\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.700433 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.735724 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.741835 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.747947 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kjxz9" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.756001 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95mg4\" (UniqueName: \"kubernetes.io/projected/1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0-kube-api-access-95mg4\") pod \"ovn-operator-controller-manager-bbc5b68f9-str9n\" (UID: \"1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.756075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.756105 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2zm\" (UniqueName: \"kubernetes.io/projected/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-kube-api-access-rj2zm\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.756160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67942\" (UniqueName: \"kubernetes.io/projected/a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7-kube-api-access-67942\") pod \"octavia-operator-controller-manager-5f4f55cb5c-kjnx8\" (UID: \"a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.756202 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhgm5\" (UniqueName: \"kubernetes.io/projected/15c585dd-9efa-430b-aeb5-42eaeace0d18-kube-api-access-qhgm5\") pod \"neutron-operator-controller-manager-776c5696bf-cds6d\" (UID: \"15c585dd-9efa-430b-aeb5-42eaeace0d18\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.756219 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd8k6\" (UniqueName: \"kubernetes.io/projected/90a35858-7aa2-450f-af1f-9686c8be3863-kube-api-access-kd8k6\") pod \"nova-operator-controller-manager-569cc54c5-d9k4b\" (UID: \"90a35858-7aa2-450f-af1f-9686c8be3863\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.777663 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.779554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhgm5\" (UniqueName: 
\"kubernetes.io/projected/15c585dd-9efa-430b-aeb5-42eaeace0d18-kube-api-access-qhgm5\") pod \"neutron-operator-controller-manager-776c5696bf-cds6d\" (UID: \"15c585dd-9efa-430b-aeb5-42eaeace0d18\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.816592 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-v98w4"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.821569 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.827870 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5h2jb" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.833855 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.833930 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-v98w4"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.859743 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.859968 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2zm\" (UniqueName: \"kubernetes.io/projected/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-kube-api-access-rj2zm\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.860111 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67942\" (UniqueName: \"kubernetes.io/projected/a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7-kube-api-access-67942\") pod \"octavia-operator-controller-manager-5f4f55cb5c-kjnx8\" (UID: \"a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.860246 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd8k6\" (UniqueName: \"kubernetes.io/projected/90a35858-7aa2-450f-af1f-9686c8be3863-kube-api-access-kd8k6\") pod \"nova-operator-controller-manager-569cc54c5-d9k4b\" (UID: \"90a35858-7aa2-450f-af1f-9686c8be3863\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.860433 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95mg4\" (UniqueName: \"kubernetes.io/projected/1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0-kube-api-access-95mg4\") pod \"ovn-operator-controller-manager-bbc5b68f9-str9n\" (UID: \"1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0\") " 
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.860538 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krcmc\" (UniqueName: \"kubernetes.io/projected/ecc97932-9eae-4d08-910b-b68e0e7d8002-kube-api-access-krcmc\") pod \"placement-operator-controller-manager-574d45c66c-4rpv5\" (UID: \"ecc97932-9eae-4d08-910b-b68e0e7d8002\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 16:21:43 crc kubenswrapper[4687]: E0312 16:21:43.860740 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:43 crc kubenswrapper[4687]: E0312 16:21:43.860862 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert podName:dc5ebdf2-e54a-4c66-abb7-35039f9226dc nodeName:}" failed. No retries permitted until 2026-03-12 16:21:44.360844411 +0000 UTC m=+1153.324806755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" (UID: "dc5ebdf2-e54a-4c66-abb7-35039f9226dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.861660 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.870939 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.872512 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.886219 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-frdqp" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.886291 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.887363 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.896432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67942\" (UniqueName: \"kubernetes.io/projected/a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7-kube-api-access-67942\") pod \"octavia-operator-controller-manager-5f4f55cb5c-kjnx8\" (UID: \"a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.968467 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.968533 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7ks5\" (UniqueName: \"kubernetes.io/projected/066b8087-d58d-4c75-a4bb-4a4b26710855-kube-api-access-w7ks5\") pod \"telemetry-operator-controller-manager-748fccb5bd-pgbh8\" (UID: \"066b8087-d58d-4c75-a4bb-4a4b26710855\") " pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.968672 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krcmc\" (UniqueName: \"kubernetes.io/projected/ecc97932-9eae-4d08-910b-b68e0e7d8002-kube-api-access-krcmc\") pod \"placement-operator-controller-manager-574d45c66c-4rpv5\" (UID: \"ecc97932-9eae-4d08-910b-b68e0e7d8002\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.968697 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvk8g\" (UniqueName: \"kubernetes.io/projected/65afd209-a452-442f-853d-d2e062fa2530-kube-api-access-pvk8g\") pod \"swift-operator-controller-manager-677c674df7-v98w4\" (UID: \"65afd209-a452-442f-853d-d2e062fa2530\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 16:21:43 crc kubenswrapper[4687]: E0312 16:21:43.968928 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:43 crc kubenswrapper[4687]: E0312 16:21:43.968983 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert podName:43d0733d-5a4f-4b51-a95e-eb2cf8593545 nodeName:}" failed. No retries permitted until 2026-03-12 16:21:44.968965883 +0000 UTC m=+1153.932928227 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert") pod "infra-operator-controller-manager-5995f4446f-7cchw" (UID: "43d0733d-5a4f-4b51-a95e-eb2cf8593545") : secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.970605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2zm\" (UniqueName: \"kubernetes.io/projected/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-kube-api-access-rj2zm\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.981774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95mg4\" (UniqueName: \"kubernetes.io/projected/1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0-kube-api-access-95mg4\") pod \"ovn-operator-controller-manager-bbc5b68f9-str9n\" (UID: \"1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.984936 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp"] Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.987129 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.992580 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zxt9q" Mar 12 16:21:43 crc kubenswrapper[4687]: I0312 16:21:43.993563 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krcmc\" (UniqueName: \"kubernetes.io/projected/ecc97932-9eae-4d08-910b-b68e0e7d8002-kube-api-access-krcmc\") pod \"placement-operator-controller-manager-574d45c66c-4rpv5\" (UID: \"ecc97932-9eae-4d08-910b-b68e0e7d8002\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.000995 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.009424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd8k6\" (UniqueName: \"kubernetes.io/projected/90a35858-7aa2-450f-af1f-9686c8be3863-kube-api-access-kd8k6\") pod \"nova-operator-controller-manager-569cc54c5-d9k4b\" (UID: \"90a35858-7aa2-450f-af1f-9686c8be3863\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.019863 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.064275 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.073210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvk8g\" (UniqueName: \"kubernetes.io/projected/65afd209-a452-442f-853d-d2e062fa2530-kube-api-access-pvk8g\") pod \"swift-operator-controller-manager-677c674df7-v98w4\" (UID: \"65afd209-a452-442f-853d-d2e062fa2530\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.073430 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5t5\" (UniqueName: \"kubernetes.io/projected/616298f1-0baf-428d-9bb9-3a87f52085e8-kube-api-access-7h5t5\") pod \"test-operator-controller-manager-5c5cb9c4d7-ft9mp\" (UID: \"616298f1-0baf-428d-9bb9-3a87f52085e8\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.073548 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7ks5\" (UniqueName: \"kubernetes.io/projected/066b8087-d58d-4c75-a4bb-4a4b26710855-kube-api-access-w7ks5\") pod \"telemetry-operator-controller-manager-748fccb5bd-pgbh8\" (UID: \"066b8087-d58d-4c75-a4bb-4a4b26710855\") " pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.074176 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.093008 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.095127 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7ks5\" (UniqueName: \"kubernetes.io/projected/066b8087-d58d-4c75-a4bb-4a4b26710855-kube-api-access-w7ks5\") pod \"telemetry-operator-controller-manager-748fccb5bd-pgbh8\" (UID: \"066b8087-d58d-4c75-a4bb-4a4b26710855\") " pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.099875 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvk8g\" (UniqueName: \"kubernetes.io/projected/65afd209-a452-442f-853d-d2e062fa2530-kube-api-access-pvk8g\") pod \"swift-operator-controller-manager-677c674df7-v98w4\" (UID: \"65afd209-a452-442f-853d-d2e062fa2530\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.114843 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.115090 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.118648 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2gct2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.121418 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.121449 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.179064 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4fn\" (UniqueName: \"kubernetes.io/projected/85d59f34-51a3-4c41-836e-9cc32f5da5e4-kube-api-access-6l4fn\") pod \"watcher-operator-controller-manager-6dd88c6f67-hngln\" (UID: \"85d59f34-51a3-4c41-836e-9cc32f5da5e4\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.179138 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h5t5\" (UniqueName: \"kubernetes.io/projected/616298f1-0baf-428d-9bb9-3a87f52085e8-kube-api-access-7h5t5\") pod \"test-operator-controller-manager-5c5cb9c4d7-ft9mp\" (UID: \"616298f1-0baf-428d-9bb9-3a87f52085e8\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.202341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h5t5\" (UniqueName: \"kubernetes.io/projected/616298f1-0baf-428d-9bb9-3a87f52085e8-kube-api-access-7h5t5\") pod \"test-operator-controller-manager-5c5cb9c4d7-ft9mp\" (UID: \"616298f1-0baf-428d-9bb9-3a87f52085e8\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.204175 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.206020 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.209074 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.209254 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-n62xp" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.209715 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.232804 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.256430 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" event={"ID":"26adb4e9-0197-4023-b876-afbb572f93d8","Type":"ContainerStarted","Data":"ed16dceec1a10d01b3b6bc33f04f7f10d024b9605271679fa613fb83b2931310"} Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.259920 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.261835 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.266986 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4lqm7" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.280201 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.281466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l4fn\" (UniqueName: \"kubernetes.io/projected/85d59f34-51a3-4c41-836e-9cc32f5da5e4-kube-api-access-6l4fn\") pod \"watcher-operator-controller-manager-6dd88c6f67-hngln\" (UID: \"85d59f34-51a3-4c41-836e-9cc32f5da5e4\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.281507 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8hl\" (UniqueName: \"kubernetes.io/projected/9af65423-8d26-4ff5-97ee-711dc0c4501b-kube-api-access-wk8hl\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.281535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.281611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.287627 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.303212 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l4fn\" (UniqueName: \"kubernetes.io/projected/85d59f34-51a3-4c41-836e-9cc32f5da5e4-kube-api-access-6l4fn\") pod \"watcher-operator-controller-manager-6dd88c6f67-hngln\" (UID: \"85d59f34-51a3-4c41-836e-9cc32f5da5e4\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.303649 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.309043 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.338645 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.383508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tbb\" (UniqueName: \"kubernetes.io/projected/bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2-kube-api-access-n6tbb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-g4z6z\" (UID: \"bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.383679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.383795 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk8hl\" (UniqueName: \"kubernetes.io/projected/9af65423-8d26-4ff5-97ee-711dc0c4501b-kube-api-access-wk8hl\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.383914 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.384093 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.383859 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.384527 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.384575 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.386514 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert podName:dc5ebdf2-e54a-4c66-abb7-35039f9226dc nodeName:}" failed. No retries permitted until 2026-03-12 16:21:45.38624142 +0000 UTC m=+1154.350203764 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" (UID: "dc5ebdf2-e54a-4c66-abb7-35039f9226dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.386659 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:21:44.886641461 +0000 UTC m=+1153.850603815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "metrics-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.387751 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:21:44.887738501 +0000 UTC m=+1153.851700845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "webhook-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.405824 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.412725 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk8hl\" (UniqueName: \"kubernetes.io/projected/9af65423-8d26-4ff5-97ee-711dc0c4501b-kube-api-access-wk8hl\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.469533 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.487982 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6tbb\" (UniqueName: \"kubernetes.io/projected/bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2-kube-api-access-n6tbb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-g4z6z\" (UID: \"bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.504992 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6tbb\" (UniqueName: \"kubernetes.io/projected/bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2-kube-api-access-n6tbb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-g4z6z\" (UID: \"bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.667534 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.777080 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.832975 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr"] Mar 12 16:21:44 crc kubenswrapper[4687]: W0312 16:21:44.833118 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a70e69_432d_4ddc_8eef_e16f4e374c56.slice/crio-d273fbf35cb9e6b4e95a399cc442728322f499b0205ec0051fe609c736795edb WatchSource:0}: Error finding container d273fbf35cb9e6b4e95a399cc442728322f499b0205ec0051fe609c736795edb: Status 404 returned error can't find the container with id d273fbf35cb9e6b4e95a399cc442728322f499b0205ec0051fe609c736795edb Mar 12 16:21:44 crc kubenswrapper[4687]: W0312 16:21:44.836411 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf6289c5_2a9a_4429_96d6_3c7bbff706e0.slice/crio-d1ed7c8ff3944138f2e5c371987dcee621f71d3266b6e85060c49060ef12d32f WatchSource:0}: Error finding container d1ed7c8ff3944138f2e5c371987dcee621f71d3266b6e85060c49060ef12d32f: Status 404 returned error can't find the container with id d1ed7c8ff3944138f2e5c371987dcee621f71d3266b6e85060c49060ef12d32f Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.839242 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j"] Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.895064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.895255 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.895599 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.895710 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:21:45.895685039 +0000 UTC m=+1154.859647423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "webhook-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.896231 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.896289 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:21:45.896273655 +0000 UTC m=+1154.860236039 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "metrics-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: I0312 16:21:44.996303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.996496 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:44 crc kubenswrapper[4687]: E0312 16:21:44.996542 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert podName:43d0733d-5a4f-4b51-a95e-eb2cf8593545 nodeName:}" failed. No retries permitted until 2026-03-12 16:21:46.99652695 +0000 UTC m=+1155.960489294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert") pod "infra-operator-controller-manager-5995f4446f-7cchw" (UID: "43d0733d-5a4f-4b51-a95e-eb2cf8593545") : secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.269001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" event={"ID":"af6289c5-2a9a-4429-96d6-3c7bbff706e0","Type":"ContainerStarted","Data":"d1ed7c8ff3944138f2e5c371987dcee621f71d3266b6e85060c49060ef12d32f"} Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.270028 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" event={"ID":"64a70e69-432d-4ddc-8eef-e16f4e374c56","Type":"ContainerStarted","Data":"d273fbf35cb9e6b4e95a399cc442728322f499b0205ec0051fe609c736795edb"} Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.271229 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" event={"ID":"63f537cc-6a26-4a05-9b17-80549297e9f2","Type":"ContainerStarted","Data":"daabe26a37b70041a94b6108ca41e861bd417924893de39365517b9e36c59982"} Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.391980 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk"] Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.402802 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.402970 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.403027 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert podName:dc5ebdf2-e54a-4c66-abb7-35039f9226dc nodeName:}" failed. No retries permitted until 2026-03-12 16:21:47.403009009 +0000 UTC m=+1156.366971353 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" (UID: "dc5ebdf2-e54a-4c66-abb7-35039f9226dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.410691 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q"] Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.426467 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc40919_64b1_4b8c_ab92_b9297cb5c352.slice/crio-e89ef02ade9c51b81c90e3dbedc1e3d88e6c35002bcdf1d604f1a8e1825f613d WatchSource:0}: Error finding container e89ef02ade9c51b81c90e3dbedc1e3d88e6c35002bcdf1d604f1a8e1825f613d: Status 404 returned error can't find the container with id e89ef02ade9c51b81c90e3dbedc1e3d88e6c35002bcdf1d604f1a8e1825f613d Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.430705 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg"] Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.452677 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97"] Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.466905 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx"] Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.467769 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7569aa35_67ce_43f4_8e4c_f851973745d9.slice/crio-ae4cd22c8d82d6cb77ea1f2dcd15128dbd2a3f9fb3c3154614cffd1b1adf3914 WatchSource:0}: Error finding container ae4cd22c8d82d6cb77ea1f2dcd15128dbd2a3f9fb3c3154614cffd1b1adf3914: Status 404 returned error can't find the container with id ae4cd22c8d82d6cb77ea1f2dcd15128dbd2a3f9fb3c3154614cffd1b1adf3914 Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.494151 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n"] Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.512274 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45"] Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.513864 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d"] Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.922072 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.922210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: 
\"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.922399 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.922463 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:21:47.922444843 +0000 UTC m=+1156.886407187 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "webhook-server-cert" not found Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.922525 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.922555 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:21:47.922545596 +0000 UTC m=+1156.886507940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "metrics-server-cert" not found Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.924776 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-v98w4"] Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.935696 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecc97932_9eae_4d08_910b_b68e0e7d8002.slice/crio-fc2752672421ddeeaa0ca620f4e2f00ce286162b6c5a9cc15a4dcd4b950ffd52 WatchSource:0}: Error finding container fc2752672421ddeeaa0ca620f4e2f00ce286162b6c5a9cc15a4dcd4b950ffd52: Status 404 returned error can't find the container with id fc2752672421ddeeaa0ca620f4e2f00ce286162b6c5a9cc15a4dcd4b950ffd52 Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.938824 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65afd209_a452_442f_853d_d2e062fa2530.slice/crio-c86c4f58872cecd2358774d076695a8fceb13c0059935981a46034758faf5eeb WatchSource:0}: Error finding container c86c4f58872cecd2358774d076695a8fceb13c0059935981a46034758faf5eeb: Status 404 returned error can't find the container with id c86c4f58872cecd2358774d076695a8fceb13c0059935981a46034758faf5eeb Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.941074 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85d59f34_51a3_4c41_836e_9cc32f5da5e4.slice/crio-ae72fa23669f02f7feeb55fa7d2bf8350c3ef50fa38ee54733d1b71e1d292ff8 WatchSource:0}: Error finding container ae72fa23669f02f7feeb55fa7d2bf8350c3ef50fa38ee54733d1b71e1d292ff8: Status 404 returned error can't find 
the container with id ae72fa23669f02f7feeb55fa7d2bf8350c3ef50fa38ee54733d1b71e1d292ff8 Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.949983 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod616298f1_0baf_428d_9bb9_3a87f52085e8.slice/crio-68b725dbf0ebb973bfb39fdae51a54ade8a6842bf4a0fdcd9bbd5f53d437cdd5 WatchSource:0}: Error finding container 68b725dbf0ebb973bfb39fdae51a54ade8a6842bf4a0fdcd9bbd5f53d437cdd5: Status 404 returned error can't find the container with id 68b725dbf0ebb973bfb39fdae51a54ade8a6842bf4a0fdcd9bbd5f53d437cdd5 Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.954736 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90a35858_7aa2_450f_af1f_9686c8be3863.slice/crio-0556962a079e2435c188e9238aa374ac69ae29fd8a6a51e37bedce4558249147 WatchSource:0}: Error finding container 0556962a079e2435c188e9238aa374ac69ae29fd8a6a51e37bedce4558249147: Status 404 returned error can't find the container with id 0556962a079e2435c188e9238aa374ac69ae29fd8a6a51e37bedce4558249147 Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.956011 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp"] Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.978656 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l4fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6dd88c6f67-hngln_openstack-operators(85d59f34-51a3-4c41-836e-9cc32f5da5e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.978834 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kd8k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-569cc54c5-d9k4b_openstack-operators(90a35858-7aa2-450f-af1f-9686c8be3863): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 16:21:45 crc 
kubenswrapper[4687]: E0312 16:21:45.979965 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.980081 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" Mar 12 16:21:45 crc kubenswrapper[4687]: W0312 16:21:45.980421 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc24e0b5_b9e5_44a3_b36c_dad06da1c2e2.slice/crio-41d0969e25e66e15ab2823f647895aadc874449c9cfd18bf99d113778dd72378 WatchSource:0}: Error finding container 41d0969e25e66e15ab2823f647895aadc874449c9cfd18bf99d113778dd72378: Status 404 returned error can't find the container with id 41d0969e25e66e15ab2823f647895aadc874449c9cfd18bf99d113778dd72378 Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.992533 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6tbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-g4z6z_openstack-operators(bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.992746 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67942,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-kjnx8_openstack-operators(a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 16:21:45 crc kubenswrapper[4687]: I0312 16:21:45.993810 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5"] Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.994052 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" Mar 12 16:21:45 crc kubenswrapper[4687]: E0312 16:21:45.994136 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" podUID="bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2" Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.000687 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8"] Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.008766 
4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln"] Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.023413 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b"] Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.023463 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z"] Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.028993 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8"] Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.343587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" event={"ID":"65afd209-a452-442f-853d-d2e062fa2530","Type":"ContainerStarted","Data":"c86c4f58872cecd2358774d076695a8fceb13c0059935981a46034758faf5eeb"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.345574 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" event={"ID":"7569aa35-67ce-43f4-8e4c-f851973745d9","Type":"ContainerStarted","Data":"ae4cd22c8d82d6cb77ea1f2dcd15128dbd2a3f9fb3c3154614cffd1b1adf3914"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.357635 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" event={"ID":"1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0","Type":"ContainerStarted","Data":"97544109ba16a91a96ce7e5fceb3386fe3a811106b93d5516a038911e75b3f5a"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.360662 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" event={"ID":"39ab069c-1ccd-4ad4-b4ea-b71b1b09472f","Type":"ContainerStarted","Data":"e0e8434de1dc00eb6668b53277421168634dc44289948642c5b039a526dcce39"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.366764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" event={"ID":"1c9c8552-26b0-408f-bd09-40c74041cbfa","Type":"ContainerStarted","Data":"426cb7fee3d68914d31193ad4da371b0b7b28eb830c3d8cd36b3ec4a35c9f713"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.383628 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" event={"ID":"ecc97932-9eae-4d08-910b-b68e0e7d8002","Type":"ContainerStarted","Data":"fc2752672421ddeeaa0ca620f4e2f00ce286162b6c5a9cc15a4dcd4b950ffd52"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.403642 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" event={"ID":"0fc40919-64b1-4b8c-ab92-b9297cb5c352","Type":"ContainerStarted","Data":"e89ef02ade9c51b81c90e3dbedc1e3d88e6c35002bcdf1d604f1a8e1825f613d"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.431576 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" event={"ID":"85d59f34-51a3-4c41-836e-9cc32f5da5e4","Type":"ContainerStarted","Data":"ae72fa23669f02f7feeb55fa7d2bf8350c3ef50fa38ee54733d1b71e1d292ff8"} Mar 12 16:21:46 crc kubenswrapper[4687]: E0312 
16:21:46.437121 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.462584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" event={"ID":"616298f1-0baf-428d-9bb9-3a87f52085e8","Type":"ContainerStarted","Data":"68b725dbf0ebb973bfb39fdae51a54ade8a6842bf4a0fdcd9bbd5f53d437cdd5"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.470626 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" event={"ID":"15c585dd-9efa-430b-aeb5-42eaeace0d18","Type":"ContainerStarted","Data":"ad89e1822c1e141db0642cfd05ccaa46d239de261548c9749fabe87c27f5a406"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.476052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" event={"ID":"79d0c51f-999a-4e39-b6b5-aecf10472a4c","Type":"ContainerStarted","Data":"228aa39af7492c92dd3177231f0ede86535a663f09f946508f7512840ce3a988"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.484096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" event={"ID":"a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7","Type":"ContainerStarted","Data":"1bd83d3626ae7dddd35e462a64164a8fbac013d371ffdbd6d716ede37a41373b"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.493598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" event={"ID":"066b8087-d58d-4c75-a4bb-4a4b26710855","Type":"ContainerStarted","Data":"25351534f03a1db86a88c89f1153a86d583ea1d3b0a3ed89648c3de0a17ffb33"} Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.494687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" event={"ID":"718e95dd-fb86-4403-8048-d68f1f23d3ca","Type":"ContainerStarted","Data":"cfd75cede2d33f45311b06469801c86535f7efd4554c63cb4a8eda4f8e3cebf5"} Mar 12 16:21:46 crc kubenswrapper[4687]: E0312 16:21:46.498582 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.528619 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" event={"ID":"bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2","Type":"ContainerStarted","Data":"41d0969e25e66e15ab2823f647895aadc874449c9cfd18bf99d113778dd72378"} Mar 12 16:21:46 crc kubenswrapper[4687]: E0312 16:21:46.532561 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" podUID="bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2" Mar 12 16:21:46 crc kubenswrapper[4687]: I0312 16:21:46.543707 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" event={"ID":"90a35858-7aa2-450f-af1f-9686c8be3863","Type":"ContainerStarted","Data":"0556962a079e2435c188e9238aa374ac69ae29fd8a6a51e37bedce4558249147"} Mar 12 16:21:46 crc kubenswrapper[4687]: E0312 16:21:46.551030 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" Mar 12 16:21:47 crc kubenswrapper[4687]: I0312 16:21:47.057138 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.057359 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.057423 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert podName:43d0733d-5a4f-4b51-a95e-eb2cf8593545 nodeName:}" failed. No retries permitted until 2026-03-12 16:21:51.057408492 +0000 UTC m=+1160.021370836 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert") pod "infra-operator-controller-manager-5995f4446f-7cchw" (UID: "43d0733d-5a4f-4b51-a95e-eb2cf8593545") : secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:47 crc kubenswrapper[4687]: I0312 16:21:47.477722 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.477963 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.478237 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert podName:dc5ebdf2-e54a-4c66-abb7-35039f9226dc nodeName:}" failed. No retries permitted until 2026-03-12 16:21:51.478037771 +0000 UTC m=+1160.442000115 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" (UID: "dc5ebdf2-e54a-4c66-abb7-35039f9226dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.552831 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.553590 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" podUID="bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2" Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.553853 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" Mar 12 16:21:47 crc kubenswrapper[4687]: E0312 16:21:47.554769 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" Mar 12 16:21:48 crc kubenswrapper[4687]: I0312 16:21:47.999899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:48 crc kubenswrapper[4687]: E0312 16:21:48.000081 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 16:21:48 crc kubenswrapper[4687]: I0312 16:21:48.000334 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:48 crc kubenswrapper[4687]: E0312 16:21:48.000353 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. 
No retries permitted until 2026-03-12 16:21:52.000334534 +0000 UTC m=+1160.964296888 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "webhook-server-cert" not found Mar 12 16:21:48 crc kubenswrapper[4687]: E0312 16:21:48.000536 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 16:21:48 crc kubenswrapper[4687]: E0312 16:21:48.000585 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:21:52.00056969 +0000 UTC m=+1160.964532034 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "metrics-server-cert" not found Mar 12 16:21:51 crc kubenswrapper[4687]: I0312 16:21:51.155939 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:51 crc kubenswrapper[4687]: E0312 16:21:51.156129 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:51 crc kubenswrapper[4687]: E0312 16:21:51.156217 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert podName:43d0733d-5a4f-4b51-a95e-eb2cf8593545 nodeName:}" failed. No retries permitted until 2026-03-12 16:21:59.156196435 +0000 UTC m=+1168.120158789 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert") pod "infra-operator-controller-manager-5995f4446f-7cchw" (UID: "43d0733d-5a4f-4b51-a95e-eb2cf8593545") : secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:51 crc kubenswrapper[4687]: I0312 16:21:51.564127 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:51 crc kubenswrapper[4687]: E0312 16:21:51.564343 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:51 crc kubenswrapper[4687]: E0312 16:21:51.564458 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert podName:dc5ebdf2-e54a-4c66-abb7-35039f9226dc nodeName:}" failed. 
No retries permitted until 2026-03-12 16:21:59.564435013 +0000 UTC m=+1168.528397357 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" (UID: "dc5ebdf2-e54a-4c66-abb7-35039f9226dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:52 crc kubenswrapper[4687]: I0312 16:21:52.076276 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:52 crc kubenswrapper[4687]: I0312 16:21:52.076799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:21:52 crc kubenswrapper[4687]: E0312 16:21:52.076465 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 16:21:52 crc kubenswrapper[4687]: E0312 16:21:52.076950 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:22:00.076928095 +0000 UTC m=+1169.040890439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "webhook-server-cert" not found Mar 12 16:21:52 crc kubenswrapper[4687]: E0312 16:21:52.077052 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 16:21:52 crc kubenswrapper[4687]: E0312 16:21:52.077203 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:22:00.077150691 +0000 UTC m=+1169.041113115 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "metrics-server-cert" not found Mar 12 16:21:57 crc kubenswrapper[4687]: E0312 16:21:57.777100 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c" Mar 12 16:21:57 crc kubenswrapper[4687]: E0312 16:21:57.777627 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvk8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-677c674df7-v98w4_openstack-operators(65afd209-a452-442f-853d-d2e062fa2530): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:21:57 crc kubenswrapper[4687]: E0312 16:21:57.778967 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" 
podUID="65afd209-a452-442f-853d-d2e062fa2530" Mar 12 16:21:58 crc kubenswrapper[4687]: E0312 16:21:58.647975 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" podUID="65afd209-a452-442f-853d-d2e062fa2530" Mar 12 16:21:59 crc kubenswrapper[4687]: I0312 16:21:59.205595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:21:59 crc kubenswrapper[4687]: E0312 16:21:59.205812 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:59 crc kubenswrapper[4687]: E0312 16:21:59.205896 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert podName:43d0733d-5a4f-4b51-a95e-eb2cf8593545 nodeName:}" failed. No retries permitted until 2026-03-12 16:22:15.205872981 +0000 UTC m=+1184.169835335 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert") pod "infra-operator-controller-manager-5995f4446f-7cchw" (UID: "43d0733d-5a4f-4b51-a95e-eb2cf8593545") : secret "infra-operator-webhook-server-cert" not found Mar 12 16:21:59 crc kubenswrapper[4687]: I0312 16:21:59.612572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:21:59 crc kubenswrapper[4687]: E0312 16:21:59.612814 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:59 crc kubenswrapper[4687]: E0312 16:21:59.612949 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert podName:dc5ebdf2-e54a-4c66-abb7-35039f9226dc nodeName:}" failed. No retries permitted until 2026-03-12 16:22:15.612919063 +0000 UTC m=+1184.576881437 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" (UID: "dc5ebdf2-e54a-4c66-abb7-35039f9226dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 16:21:59 crc kubenswrapper[4687]: E0312 16:21:59.845090 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4" Mar 12 16:21:59 crc kubenswrapper[4687]: E0312 16:21:59.845296 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7clfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-68f45f9d9f-hmn45_openstack-operators(1c9c8552-26b0-408f-bd09-40c74041cbfa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:21:59 crc kubenswrapper[4687]: E0312 16:21:59.846500 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podUID="1c9c8552-26b0-408f-bd09-40c74041cbfa" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.127916 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.128112 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.128143 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.128272 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:22:16.128250381 +0000 UTC m=+1185.092212715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "webhook-server-cert" not found Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.128389 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.128520 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs podName:9af65423-8d26-4ff5-97ee-711dc0c4501b nodeName:}" failed. No retries permitted until 2026-03-12 16:22:16.128491887 +0000 UTC m=+1185.092454331 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs") pod "openstack-operator-controller-manager-dbdf4d967-glnf2" (UID: "9af65423-8d26-4ff5-97ee-711dc0c4501b") : secret "metrics-server-cert" not found Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.130594 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555542-9mtqp"] Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.131756 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.136063 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.136383 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.136684 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.140063 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555542-9mtqp"] Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.232306 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjs4\" (UniqueName: \"kubernetes.io/projected/25c786e4-0569-47e5-b4d6-db2631c509a6-kube-api-access-9rjs4\") pod \"auto-csr-approver-29555542-9mtqp\" (UID: \"25c786e4-0569-47e5-b4d6-db2631c509a6\") " pod="openshift-infra/auto-csr-approver-29555542-9mtqp" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.333600 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rjs4\" (UniqueName: \"kubernetes.io/projected/25c786e4-0569-47e5-b4d6-db2631c509a6-kube-api-access-9rjs4\") pod \"auto-csr-approver-29555542-9mtqp\" (UID: \"25c786e4-0569-47e5-b4d6-db2631c509a6\") " pod="openshift-infra/auto-csr-approver-29555542-9mtqp" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.357945 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rjs4\" (UniqueName: \"kubernetes.io/projected/25c786e4-0569-47e5-b4d6-db2631c509a6-kube-api-access-9rjs4\") pod \"auto-csr-approver-29555542-9mtqp\" (UID: \"25c786e4-0569-47e5-b4d6-db2631c509a6\") " pod="openshift-infra/auto-csr-approver-29555542-9mtqp" Mar 12 16:22:00 crc kubenswrapper[4687]: I0312 16:22:00.460935 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.593915 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6" Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.594146 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfnnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-77b6666d85-nq67j_openstack-operators(64a70e69-432d-4ddc-8eef-e16f4e374c56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.595331 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.665449 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4\\\"\"" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podUID="1c9c8552-26b0-408f-bd09-40c74041cbfa" Mar 12 16:22:00 crc kubenswrapper[4687]: E0312 16:22:00.667278 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6\\\"\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" Mar 12 16:22:03 crc kubenswrapper[4687]: E0312 16:22:03.718858 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721" Mar 12 16:22:03 crc kubenswrapper[4687]: E0312 16:22:03.719201 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhgm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
neutron-operator-controller-manager-776c5696bf-cds6d_openstack-operators(15c585dd-9efa-430b-aeb5-42eaeace0d18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:22:03 crc kubenswrapper[4687]: E0312 16:22:03.720565 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" podUID="15c585dd-9efa-430b-aeb5-42eaeace0d18" Mar 12 16:22:04 crc kubenswrapper[4687]: E0312 16:22:04.696082 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" podUID="15c585dd-9efa-430b-aeb5-42eaeace0d18" Mar 12 16:22:06 crc kubenswrapper[4687]: E0312 16:22:06.177517 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" Mar 12 16:22:06 crc kubenswrapper[4687]: E0312 16:22:06.177920 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7h5t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-ft9mp_openstack-operators(616298f1-0baf-428d-9bb9-3a87f52085e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:22:06 crc kubenswrapper[4687]: E0312 16:22:06.179131 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podUID="616298f1-0baf-428d-9bb9-3a87f52085e8" Mar 12 16:22:06 crc kubenswrapper[4687]: E0312 16:22:06.716509 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podUID="616298f1-0baf-428d-9bb9-3a87f52085e8" Mar 12 16:22:07 crc kubenswrapper[4687]: E0312 16:22:07.581697 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978" Mar 12 16:22:07 crc kubenswrapper[4687]: E0312 16:22:07.582688 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krcmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-4rpv5_openstack-operators(ecc97932-9eae-4d08-910b-b68e0e7d8002): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:22:07 crc kubenswrapper[4687]: E0312 16:22:07.584278 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" Mar 12 16:22:07 crc kubenswrapper[4687]: E0312 16:22:07.723228 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.172759 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.172922 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tvqgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-677bd678f7-vqhcv_openstack-operators(26adb4e9-0197-4023-b876-afbb572f93d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.174095 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.700795 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.700970 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rn58w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-68xmx_openstack-operators(79d0c51f-999a-4e39-b6b5-aecf10472a4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.702252 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" podUID="79d0c51f-999a-4e39-b6b5-aecf10472a4c" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.730171 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.732595 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" podUID="79d0c51f-999a-4e39-b6b5-aecf10472a4c" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.766890 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.766942 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.767088 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-748fccb5bd-pgbh8_openstack-operators(066b8087-d58d-4c75-a4bb-4a4b26710855): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:22:08 crc kubenswrapper[4687]: E0312 16:22:08.768428 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podUID="066b8087-d58d-4c75-a4bb-4a4b26710855" Mar 12 16:22:09 crc kubenswrapper[4687]: E0312 16:22:09.769984 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:1a1a9a719889b8cdda26cbd675f0005643a8f9f2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podUID="066b8087-d58d-4c75-a4bb-4a4b26710855" Mar 12 16:22:12 crc 
kubenswrapper[4687]: I0312 16:22:11.774234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" event={"ID":"0fc40919-64b1-4b8c-ab92-b9297cb5c352","Type":"ContainerStarted","Data":"5f00b6693c66e586ef439145ff3c614b9879d6d9457c9862982e374cedca0601"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:11.775003 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:11.779123 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" event={"ID":"39ab069c-1ccd-4ad4-b4ea-b71b1b09472f","Type":"ContainerStarted","Data":"fb51c3ab3c7abec2b69974709825627059760e54c31d98baf02f04d3fe1273d3"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:11.779956 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:11.817047 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" podStartSLOduration=6.103974174 podStartE2EDuration="28.817025983s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.447701521 +0000 UTC m=+1154.411663865" lastFinishedPulling="2026-03-12 16:22:08.16075333 +0000 UTC m=+1177.124715674" observedRunningTime="2026-03-12 16:22:11.791565553 +0000 UTC m=+1180.755527897" watchObservedRunningTime="2026-03-12 16:22:11.817025983 +0000 UTC m=+1180.780988327" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:11.826965 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" podStartSLOduration=5.0250122 podStartE2EDuration="28.826943795s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.457659966 +0000 UTC m=+1154.421622310" lastFinishedPulling="2026-03-12 16:22:09.259591561 +0000 UTC m=+1178.223553905" observedRunningTime="2026-03-12 16:22:11.808601161 +0000 UTC m=+1180.772563505" watchObservedRunningTime="2026-03-12 16:22:11.826943795 +0000 UTC m=+1180.790906139" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:11.853211 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555542-9mtqp"] Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.790266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" event={"ID":"90a35858-7aa2-450f-af1f-9686c8be3863","Type":"ContainerStarted","Data":"b763c0701f82c0d08c9a41a07ec8941cbe64802cbf8f979f95997fbc09d27bab"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.792006 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" event={"ID":"1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0","Type":"ContainerStarted","Data":"c312668402caae941cf03787fc88f56e7fea4b1c85ce04bf64079be16ee3b007"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.792086 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.794162 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" event={"ID":"63f537cc-6a26-4a05-9b17-80549297e9f2","Type":"ContainerStarted","Data":"44b24b3a9e85731fb3c4fcb7090ce068ea1760aa05eaab34d3f6827001819da1"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.794275 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.795994 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" event={"ID":"65afd209-a452-442f-853d-d2e062fa2530","Type":"ContainerStarted","Data":"c3c9cc0876bab18f2073cc3f852cfd5f673361faebddbcc40d4b4a0dfe2a1330"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.796543 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.798634 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" event={"ID":"a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7","Type":"ContainerStarted","Data":"ab72c1912098b013ef8ddd51c9c1e28e87f56b5067506b72729e257a96d0740c"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.799196 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.801559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" event={"ID":"718e95dd-fb86-4403-8048-d68f1f23d3ca","Type":"ContainerStarted","Data":"7dcb12d2908d034a6d179f551bd3104d15b00ece776cf8bc2033d69e8d445805"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.802209 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.803328 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" event={"ID":"7569aa35-67ce-43f4-8e4c-f851973745d9","Type":"ContainerStarted","Data":"eaceb2e7b8d8cb691ad8727c13985df5b57e128f9a64b0c62dbab49e958c946b"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.803882 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.805380 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" event={"ID":"af6289c5-2a9a-4429-96d6-3c7bbff706e0","Type":"ContainerStarted","Data":"a293dffacc759d48161734e7b56274b324c76359d177228a99bdd6bab039aef8"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.805849 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.807155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" 
event={"ID":"25c786e4-0569-47e5-b4d6-db2631c509a6","Type":"ContainerStarted","Data":"bdb74cf02c065fb2435ec085f457f1f72e45438132baac44bda00e1b2259b21f"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.808454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" event={"ID":"bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2","Type":"ContainerStarted","Data":"178e2c7cfb9f0f9009884b4440ce2bbebbe79dfff88909ca6e9fd76c7fea6846"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.809721 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" event={"ID":"85d59f34-51a3-4c41-836e-9cc32f5da5e4","Type":"ContainerStarted","Data":"8dbf42c1a1eeb118ef1bf8e6b239d81bdecb767fc0e497d1aae3d794b8b381df"} Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.809930 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.862290 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" podStartSLOduration=6.161995488 podStartE2EDuration="29.86227415s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.558844406 +0000 UTC m=+1154.522806750" lastFinishedPulling="2026-03-12 16:22:09.259123068 +0000 UTC m=+1178.223085412" observedRunningTime="2026-03-12 16:22:12.860505491 +0000 UTC m=+1181.824467835" watchObservedRunningTime="2026-03-12 16:22:12.86227415 +0000 UTC m=+1181.826236494" Mar 12 16:22:12 crc kubenswrapper[4687]: I0312 16:22:12.938415 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" podStartSLOduration=5.360315785 podStartE2EDuration="29.938401933s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:44.681519103 +0000 UTC m=+1153.645481437" lastFinishedPulling="2026-03-12 16:22:09.259605241 +0000 UTC m=+1178.223567585" observedRunningTime="2026-03-12 16:22:12.937423726 +0000 UTC m=+1181.901386070" watchObservedRunningTime="2026-03-12 16:22:12.938401933 +0000 UTC m=+1181.902364277" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.006490 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" podStartSLOduration=4.533908403 podStartE2EDuration="30.006462065s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.992285419 +0000 UTC m=+1154.956247763" lastFinishedPulling="2026-03-12 16:22:11.464839081 +0000 UTC m=+1180.428801425" observedRunningTime="2026-03-12 16:22:13.003775481 +0000 UTC m=+1181.967737825" watchObservedRunningTime="2026-03-12 16:22:13.006462065 +0000 UTC m=+1181.970424399" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.067416 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podStartSLOduration=4.688976046 podStartE2EDuration="30.06739916s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.99268479 +0000 UTC m=+1154.956647134" lastFinishedPulling="2026-03-12 16:22:11.371107904 +0000 UTC m=+1180.335070248" 
observedRunningTime="2026-03-12 16:22:13.065515098 +0000 UTC m=+1182.029477442" watchObservedRunningTime="2026-03-12 16:22:13.06739916 +0000 UTC m=+1182.031361504" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.118100 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podStartSLOduration=4.717312943 podStartE2EDuration="30.118083213s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.978516099 +0000 UTC m=+1154.942478443" lastFinishedPulling="2026-03-12 16:22:11.379286369 +0000 UTC m=+1180.343248713" observedRunningTime="2026-03-12 16:22:13.110212017 +0000 UTC m=+1182.074174371" watchObservedRunningTime="2026-03-12 16:22:13.118083213 +0000 UTC m=+1182.082045557" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.157030 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" podStartSLOduration=6.425449518 podStartE2EDuration="30.157011703s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.527559043 +0000 UTC m=+1154.491521387" lastFinishedPulling="2026-03-12 16:22:09.259121228 +0000 UTC m=+1178.223083572" observedRunningTime="2026-03-12 16:22:13.137699052 +0000 UTC m=+1182.101661396" watchObservedRunningTime="2026-03-12 16:22:13.157011703 +0000 UTC m=+1182.120974037" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.190680 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" podStartSLOduration=4.580603115 podStartE2EDuration="30.190595037s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.971418563 +0000 UTC m=+1154.935380907" lastFinishedPulling="2026-03-12 16:22:11.581410485 +0000 UTC m=+1180.545372829" observedRunningTime="2026-03-12 16:22:13.15758199 +0000 UTC m=+1182.121544334" watchObservedRunningTime="2026-03-12 16:22:13.190595037 +0000 UTC m=+1182.154557381" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.202971 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" podStartSLOduration=5.782219527 podStartE2EDuration="30.202953097s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:44.838678006 +0000 UTC m=+1153.802640390" lastFinishedPulling="2026-03-12 16:22:09.259411616 +0000 UTC m=+1178.223373960" observedRunningTime="2026-03-12 16:22:13.176596912 +0000 UTC m=+1182.140559256" watchObservedRunningTime="2026-03-12 16:22:13.202953097 +0000 UTC m=+1182.166915441" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.219641 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podStartSLOduration=7.072813848 podStartE2EDuration="30.219617865s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.527976565 +0000 UTC m=+1154.491938909" lastFinishedPulling="2026-03-12 16:22:08.674780582 +0000 UTC m=+1177.638742926" observedRunningTime="2026-03-12 16:22:13.199641736 +0000 UTC m=+1182.163604090" watchObservedRunningTime="2026-03-12 16:22:13.219617865 +0000 UTC m=+1182.183580209" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.818342 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" event={"ID":"64a70e69-432d-4ddc-8eef-e16f4e374c56","Type":"ContainerStarted","Data":"fcc55729a0a9ac125437dd5f4e28efbdf0ebbed6d6e5c8f231a516b851e1bdcf"} Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.818771 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.822274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" event={"ID":"25c786e4-0569-47e5-b4d6-db2631c509a6","Type":"ContainerStarted","Data":"d8957448fb78dd111b081ccd6ceaf0817d0165cd845e66f32c88f4cd81231f29"} Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.835479 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podStartSLOduration=2.618250949 podStartE2EDuration="30.835458366s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:44.835307044 +0000 UTC m=+1153.799269388" lastFinishedPulling="2026-03-12 16:22:13.052514461 +0000 UTC m=+1182.016476805" observedRunningTime="2026-03-12 16:22:13.832636109 +0000 UTC m=+1182.796598453" watchObservedRunningTime="2026-03-12 16:22:13.835458366 +0000 UTC m=+1182.799420710" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.849659 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" podStartSLOduration=12.525203753 podStartE2EDuration="13.849644046s" podCreationTimestamp="2026-03-12 16:22:00 +0000 UTC" firstStartedPulling="2026-03-12 16:22:11.87585466 +0000 UTC m=+1180.839816994" lastFinishedPulling="2026-03-12 16:22:13.200294943 +0000 UTC m=+1182.164257287" observedRunningTime="2026-03-12 16:22:13.844000851 +0000 UTC m=+1182.807963195" watchObservedRunningTime="2026-03-12 16:22:13.849644046 +0000 UTC m=+1182.813606380" Mar 12 16:22:13 crc kubenswrapper[4687]: I0312 16:22:13.858320 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podStartSLOduration=5.459206862 podStartE2EDuration="30.858300054s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.978728445 +0000 UTC m=+1154.942690789" lastFinishedPulling="2026-03-12 16:22:11.377821637 +0000 UTC m=+1180.341783981" observedRunningTime="2026-03-12 16:22:13.855714143 +0000 UTC m=+1182.819676487" watchObservedRunningTime="2026-03-12 16:22:13.858300054 +0000 UTC m=+1182.822262398" Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.121414 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.121472 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 
16:22:14.121518 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.122181 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1ffdc3800e5c81c03048cac69c6e0d74a9a699fce31dea6098c57fea14e8d99"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.122264 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://f1ffdc3800e5c81c03048cac69c6e0d74a9a699fce31dea6098c57fea14e8d99" gracePeriod=600 Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.281055 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.839869 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="f1ffdc3800e5c81c03048cac69c6e0d74a9a699fce31dea6098c57fea14e8d99" exitCode=0 Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.839950 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"f1ffdc3800e5c81c03048cac69c6e0d74a9a699fce31dea6098c57fea14e8d99"} Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.840232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"a6a61ada4aa1cfe2e90a3b1ef14049216d76e56f45c1ef544713761e4d6a8f41"} Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.840254 4687 scope.go:117] "RemoveContainer" containerID="dd9807f39bd0fa78ee3cf947dc53d693272e6c942fe244d8ebc4e3c782d477d9" Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.842549 4687 generic.go:334] "Generic (PLEG): container finished" podID="25c786e4-0569-47e5-b4d6-db2631c509a6" containerID="d8957448fb78dd111b081ccd6ceaf0817d0165cd845e66f32c88f4cd81231f29" exitCode=0 Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.842631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" event={"ID":"25c786e4-0569-47e5-b4d6-db2631c509a6","Type":"ContainerDied","Data":"d8957448fb78dd111b081ccd6ceaf0817d0165cd845e66f32c88f4cd81231f29"} Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.844072 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" event={"ID":"1c9c8552-26b0-408f-bd09-40c74041cbfa","Type":"ContainerStarted","Data":"1e24499b81deef7d1562e7ac6bf1ec71e792987d29194e0eaf180ae0ab82bbe3"} Mar 12 16:22:14 crc kubenswrapper[4687]: I0312 16:22:14.889629 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podStartSLOduration=3.236106298 podStartE2EDuration="31.889612569s" podCreationTimestamp="2026-03-12 16:21:43 
+0000 UTC" firstStartedPulling="2026-03-12 16:21:45.556457071 +0000 UTC m=+1154.520419415" lastFinishedPulling="2026-03-12 16:22:14.209963342 +0000 UTC m=+1183.173925686" observedRunningTime="2026-03-12 16:22:14.880334553 +0000 UTC m=+1183.844296907" watchObservedRunningTime="2026-03-12 16:22:14.889612569 +0000 UTC m=+1183.853574913" Mar 12 16:22:15 crc kubenswrapper[4687]: I0312 16:22:15.232689 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:22:15 crc kubenswrapper[4687]: I0312 16:22:15.244306 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d0733d-5a4f-4b51-a95e-eb2cf8593545-cert\") pod \"infra-operator-controller-manager-5995f4446f-7cchw\" (UID: \"43d0733d-5a4f-4b51-a95e-eb2cf8593545\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:22:15 crc kubenswrapper[4687]: I0312 16:22:15.422201 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:22:15 crc kubenswrapper[4687]: I0312 16:22:15.640852 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:22:15 crc kubenswrapper[4687]: I0312 16:22:15.661655 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dc5ebdf2-e54a-4c66-abb7-35039f9226dc-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz\" (UID: \"dc5ebdf2-e54a-4c66-abb7-35039f9226dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:22:15 crc kubenswrapper[4687]: I0312 16:22:15.837010 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:22:15 crc kubenswrapper[4687]: I0312 16:22:15.872705 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw"] Mar 12 16:22:15 crc kubenswrapper[4687]: W0312 16:22:15.892525 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43d0733d_5a4f_4b51_a95e_eb2cf8593545.slice/crio-d9cb7c13713390990571b59b2f1d86b038df351a85f1a42330bb3fa38908cf79 WatchSource:0}: Error finding container d9cb7c13713390990571b59b2f1d86b038df351a85f1a42330bb3fa38908cf79: Status 404 returned error can't find the container with id d9cb7c13713390990571b59b2f1d86b038df351a85f1a42330bb3fa38908cf79 Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.152213 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.152441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.158218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-webhook-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.158389 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9af65423-8d26-4ff5-97ee-711dc0c4501b-metrics-certs\") pod \"openstack-operator-controller-manager-dbdf4d967-glnf2\" (UID: \"9af65423-8d26-4ff5-97ee-711dc0c4501b\") " pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.242016 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.355902 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjs4\" (UniqueName: \"kubernetes.io/projected/25c786e4-0569-47e5-b4d6-db2631c509a6-kube-api-access-9rjs4\") pod \"25c786e4-0569-47e5-b4d6-db2631c509a6\" (UID: \"25c786e4-0569-47e5-b4d6-db2631c509a6\") " Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.358154 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz"] Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.359825 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c786e4-0569-47e5-b4d6-db2631c509a6-kube-api-access-9rjs4" (OuterVolumeSpecName: "kube-api-access-9rjs4") pod "25c786e4-0569-47e5-b4d6-db2631c509a6" (UID: "25c786e4-0569-47e5-b4d6-db2631c509a6"). InnerVolumeSpecName "kube-api-access-9rjs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.381446 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.458214 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rjs4\" (UniqueName: \"kubernetes.io/projected/25c786e4-0569-47e5-b4d6-db2631c509a6-kube-api-access-9rjs4\") on node \"crc\" DevicePath \"\"" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.837292 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2"] Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.894104 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" event={"ID":"dc5ebdf2-e54a-4c66-abb7-35039f9226dc","Type":"ContainerStarted","Data":"0301e74376da307ec5e83235e688abf0cd082947cfd26c1a718e0b3d00cb4145"} Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.896612 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" event={"ID":"43d0733d-5a4f-4b51-a95e-eb2cf8593545","Type":"ContainerStarted","Data":"d9cb7c13713390990571b59b2f1d86b038df351a85f1a42330bb3fa38908cf79"} Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.905014 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.905012 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555542-9mtqp" event={"ID":"25c786e4-0569-47e5-b4d6-db2631c509a6","Type":"ContainerDied","Data":"bdb74cf02c065fb2435ec085f457f1f72e45438132baac44bda00e1b2259b21f"} Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.905138 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb74cf02c065fb2435ec085f457f1f72e45438132baac44bda00e1b2259b21f" Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.907283 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555536-zmn29"] Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.910709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" event={"ID":"9af65423-8d26-4ff5-97ee-711dc0c4501b","Type":"ContainerStarted","Data":"9e16fd1b5949e5c7d1532a8aecf705434611dc5975c770ed4259dd34c0172015"} Mar 12 16:22:16 crc kubenswrapper[4687]: I0312 16:22:16.919811 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555536-zmn29"] Mar 12 16:22:17 crc kubenswrapper[4687]: I0312 16:22:17.745052 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddd0dc3-9173-46ed-90ec-1e905ec6e821" path="/var/lib/kubelet/pods/bddd0dc3-9173-46ed-90ec-1e905ec6e821/volumes" Mar 12 16:22:17 crc kubenswrapper[4687]: I0312 16:22:17.923295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" event={"ID":"9af65423-8d26-4ff5-97ee-711dc0c4501b","Type":"ContainerStarted","Data":"95c4343e2035e1cd9235e1c34dd0cb452fb831329d867670f2d8eb04c1b2f4fd"} Mar 12 16:22:17 crc kubenswrapper[4687]: I0312 16:22:17.923618 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:17 crc kubenswrapper[4687]: I0312 16:22:17.950967 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" podStartSLOduration=34.950950014 podStartE2EDuration="34.950950014s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:22:17.948147237 +0000 UTC m=+1186.912109591" watchObservedRunningTime="2026-03-12 16:22:17.950950014 +0000 UTC m=+1186.914912358" Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.955574 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" event={"ID":"616298f1-0baf-428d-9bb9-3a87f52085e8","Type":"ContainerStarted","Data":"51e26485b980245b59b03319ab39f39fe09888ced297f429fda717a1efc3c661"} Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.956403 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.957005 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" 
event={"ID":"dc5ebdf2-e54a-4c66-abb7-35039f9226dc","Type":"ContainerStarted","Data":"75b87336dba32d8044793a9adca94371c7650a3a21bb8352c59aa2893cafae65"} Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.957083 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.958318 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" event={"ID":"ecc97932-9eae-4d08-910b-b68e0e7d8002","Type":"ContainerStarted","Data":"c48ba5164766a452e3b04daf316a03727777eb478b9c5f50a19ce060413235b5"} Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.958496 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.959956 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" event={"ID":"43d0733d-5a4f-4b51-a95e-eb2cf8593545","Type":"ContainerStarted","Data":"2fad8e90d08a0e61af0ff788e46212b70d203f4849c7aefa8a4806e3a834b435"} Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.960076 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.961525 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" event={"ID":"15c585dd-9efa-430b-aeb5-42eaeace0d18","Type":"ContainerStarted","Data":"14d9968e147b5fbf2f9417d1b2050c63de24bbf68ddf7898fd051e857f919df9"} Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.961677 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.963040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" event={"ID":"26adb4e9-0197-4023-b876-afbb572f93d8","Type":"ContainerStarted","Data":"432d2d1d0d1a7ab768a27590fdf60bb1273314361a17bdb5390cebf75e31ef9a"} Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.963216 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 16:22:21 crc kubenswrapper[4687]: I0312 16:22:21.984736 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podStartSLOduration=3.9486696610000003 podStartE2EDuration="38.984719396s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.978491248 +0000 UTC m=+1154.942453592" lastFinishedPulling="2026-03-12 16:22:21.014540983 +0000 UTC m=+1189.978503327" observedRunningTime="2026-03-12 16:22:21.97941947 +0000 UTC m=+1190.943381814" watchObservedRunningTime="2026-03-12 16:22:21.984719396 +0000 UTC m=+1190.948681740" Mar 12 16:22:22 crc kubenswrapper[4687]: I0312 16:22:22.020927 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" podStartSLOduration=3.568884866 
podStartE2EDuration="39.020912631s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.565752996 +0000 UTC m=+1154.529715330" lastFinishedPulling="2026-03-12 16:22:21.017780751 +0000 UTC m=+1189.981743095" observedRunningTime="2026-03-12 16:22:22.0168614 +0000 UTC m=+1190.980823744" watchObservedRunningTime="2026-03-12 16:22:22.020912631 +0000 UTC m=+1190.984874975" Mar 12 16:22:22 crc kubenswrapper[4687]: I0312 16:22:22.023957 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podStartSLOduration=2.138611684 podStartE2EDuration="39.023949155s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:44.131154736 +0000 UTC m=+1153.095117080" lastFinishedPulling="2026-03-12 16:22:21.016492207 +0000 UTC m=+1189.980454551" observedRunningTime="2026-03-12 16:22:22.003672668 +0000 UTC m=+1190.967635012" watchObservedRunningTime="2026-03-12 16:22:22.023949155 +0000 UTC m=+1190.987911499" Mar 12 16:22:22 crc kubenswrapper[4687]: I0312 16:22:22.043258 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" podStartSLOduration=34.369889951 podStartE2EDuration="39.043234205s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:22:16.36127777 +0000 UTC m=+1185.325240114" lastFinishedPulling="2026-03-12 16:22:21.034622024 +0000 UTC m=+1189.998584368" observedRunningTime="2026-03-12 16:22:22.041214879 +0000 UTC m=+1191.005177223" watchObservedRunningTime="2026-03-12 16:22:22.043234205 +0000 UTC m=+1191.007196549" Mar 12 16:22:22 crc kubenswrapper[4687]: I0312 16:22:22.055290 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" podStartSLOduration=33.98911353 podStartE2EDuration="39.055274155s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:22:15.900473561 +0000 UTC m=+1184.864435905" lastFinishedPulling="2026-03-12 16:22:20.966634186 +0000 UTC m=+1189.930596530" observedRunningTime="2026-03-12 16:22:22.054058913 +0000 UTC m=+1191.018021257" watchObservedRunningTime="2026-03-12 16:22:22.055274155 +0000 UTC m=+1191.019236499" Mar 12 16:22:22 crc kubenswrapper[4687]: I0312 16:22:22.076088 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podStartSLOduration=4.033536435 podStartE2EDuration="39.076070377s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.976395801 +0000 UTC m=+1154.940358145" lastFinishedPulling="2026-03-12 16:22:21.018929743 +0000 UTC m=+1189.982892087" observedRunningTime="2026-03-12 16:22:22.07179139 +0000 UTC m=+1191.035753734" watchObservedRunningTime="2026-03-12 16:22:22.076070377 +0000 UTC m=+1191.040032721" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.426610 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.454004 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 
16:22:23.477118 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.582330 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.677882 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.707734 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.845034 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.863723 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.867543 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.982304 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" event={"ID":"79d0c51f-999a-4e39-b6b5-aecf10472a4c","Type":"ContainerStarted","Data":"f2a1077fa45434f636929cc923217bbb5814ca4b87306c2a7a917546f21a4b47"} Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.982902 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 16:22:23 crc kubenswrapper[4687]: I0312 16:22:23.996081 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" podStartSLOduration=3.747398634 podStartE2EDuration="40.996063885s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.555580306 +0000 UTC m=+1154.519542650" lastFinishedPulling="2026-03-12 16:22:22.804245557 +0000 UTC m=+1191.768207901" observedRunningTime="2026-03-12 16:22:23.99479617 +0000 UTC m=+1192.958758524" watchObservedRunningTime="2026-03-12 16:22:23.996063885 +0000 UTC m=+1192.960026229" Mar 12 16:22:24 crc kubenswrapper[4687]: I0312 16:22:24.004011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 16:22:24 crc kubenswrapper[4687]: I0312 16:22:24.084074 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 16:22:24 crc kubenswrapper[4687]: I0312 16:22:24.284793 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 16:22:24 crc kubenswrapper[4687]: I0312 16:22:24.310821 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 16:22:24 crc 
kubenswrapper[4687]: I0312 16:22:24.471530 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 16:22:25 crc kubenswrapper[4687]: I0312 16:22:25.001033 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" event={"ID":"066b8087-d58d-4c75-a4bb-4a4b26710855","Type":"ContainerStarted","Data":"067176c31217f6764be8756d35108369c6e2f6959023a80443d3d0a837766b1c"} Mar 12 16:22:25 crc kubenswrapper[4687]: I0312 16:22:25.001856 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 16:22:25 crc kubenswrapper[4687]: I0312 16:22:25.017502 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podStartSLOduration=3.133372682 podStartE2EDuration="42.017487277s" podCreationTimestamp="2026-03-12 16:21:43 +0000 UTC" firstStartedPulling="2026-03-12 16:21:45.931307877 +0000 UTC m=+1154.895270211" lastFinishedPulling="2026-03-12 16:22:24.815422452 +0000 UTC m=+1193.779384806" observedRunningTime="2026-03-12 16:22:25.016620523 +0000 UTC m=+1193.980582867" watchObservedRunningTime="2026-03-12 16:22:25.017487277 +0000 UTC m=+1193.981449621" Mar 12 16:22:26 crc kubenswrapper[4687]: I0312 16:22:26.391011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 16:22:33 crc kubenswrapper[4687]: I0312 16:22:33.387786 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 16:22:33 crc kubenswrapper[4687]: I0312 16:22:33.692157 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 16:22:33 crc kubenswrapper[4687]: I0312 16:22:33.890077 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 16:22:34 crc kubenswrapper[4687]: I0312 16:22:34.097894 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 16:22:34 crc kubenswrapper[4687]: I0312 16:22:34.343548 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 16:22:34 crc kubenswrapper[4687]: I0312 16:22:34.409082 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 16:22:35 crc kubenswrapper[4687]: I0312 16:22:35.429397 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 16:22:35 crc kubenswrapper[4687]: I0312 16:22:35.842206 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.902645 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdnp7"] Mar 12 
16:22:53 crc kubenswrapper[4687]: E0312 16:22:53.903634 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c786e4-0569-47e5-b4d6-db2631c509a6" containerName="oc" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.903650 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c786e4-0569-47e5-b4d6-db2631c509a6" containerName="oc" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.903865 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c786e4-0569-47e5-b4d6-db2631c509a6" containerName="oc" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.905039 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.910925 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.911463 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.911799 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.912029 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bzgmz" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.914883 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmb8\" (UniqueName: \"kubernetes.io/projected/bac824fc-109e-4d83-8124-9bd315b2bbac-kube-api-access-dmmb8\") pod \"dnsmasq-dns-675f4bcbfc-pdnp7\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.915232 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac824fc-109e-4d83-8124-9bd315b2bbac-config\") pod \"dnsmasq-dns-675f4bcbfc-pdnp7\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.924210 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdnp7"] Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.971683 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gkhhd"] Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.973160 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.979597 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 12 16:22:53 crc kubenswrapper[4687]: I0312 16:22:53.987088 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gkhhd"] Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.017016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac824fc-109e-4d83-8124-9bd315b2bbac-config\") pod \"dnsmasq-dns-675f4bcbfc-pdnp7\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.017101 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmmb8\" (UniqueName: \"kubernetes.io/projected/bac824fc-109e-4d83-8124-9bd315b2bbac-kube-api-access-dmmb8\") pod \"dnsmasq-dns-675f4bcbfc-pdnp7\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.017958 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac824fc-109e-4d83-8124-9bd315b2bbac-config\") pod \"dnsmasq-dns-675f4bcbfc-pdnp7\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.035721 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmmb8\" (UniqueName: \"kubernetes.io/projected/bac824fc-109e-4d83-8124-9bd315b2bbac-kube-api-access-dmmb8\") pod \"dnsmasq-dns-675f4bcbfc-pdnp7\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.118745 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqgl5\" (UniqueName: \"kubernetes.io/projected/05bc5887-8df5-4969-9874-1a2ac5c761a0-kube-api-access-mqgl5\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.119184 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-config\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.119326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.221454 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-config\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 
16:22:54.221543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.221631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqgl5\" (UniqueName: \"kubernetes.io/projected/05bc5887-8df5-4969-9874-1a2ac5c761a0-kube-api-access-mqgl5\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.222306 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.222323 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-config\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.229353 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.249565 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqgl5\" (UniqueName: \"kubernetes.io/projected/05bc5887-8df5-4969-9874-1a2ac5c761a0-kube-api-access-mqgl5\") pod \"dnsmasq-dns-78dd6ddcc-gkhhd\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.292747 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.725828 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdnp7"] Mar 12 16:22:54 crc kubenswrapper[4687]: I0312 16:22:54.832539 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gkhhd"] Mar 12 16:22:55 crc kubenswrapper[4687]: I0312 16:22:55.270940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" event={"ID":"05bc5887-8df5-4969-9874-1a2ac5c761a0","Type":"ContainerStarted","Data":"95fe240090525dee9126c62d2194a24984481e7cc0723933b382962014746f99"} Mar 12 16:22:55 crc kubenswrapper[4687]: I0312 16:22:55.272444 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" event={"ID":"bac824fc-109e-4d83-8124-9bd315b2bbac","Type":"ContainerStarted","Data":"c025e2c6d630f6ddb3cb66f3413ce7b9df3afc3f8655fdb448fc482412cd1917"} Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.740992 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdnp7"] Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.767445 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-24427"] Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.770309 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.784536 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-24427"] Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.872542 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-config\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.872608 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnjtw\" (UniqueName: \"kubernetes.io/projected/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-kube-api-access-bnjtw\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.872670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.974809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.974915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-config\") pod 
\"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.974958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnjtw\" (UniqueName: \"kubernetes.io/projected/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-kube-api-access-bnjtw\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.976060 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:56 crc kubenswrapper[4687]: I0312 16:22:56.976855 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-config\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.001545 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnjtw\" (UniqueName: \"kubernetes.io/projected/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-kube-api-access-bnjtw\") pod \"dnsmasq-dns-5ccc8479f9-24427\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.008285 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gkhhd"] Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.039200 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99lwk"] Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.042225 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.047186 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99lwk"] Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.075706 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-config\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.075762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.075990 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4lv\" (UniqueName: \"kubernetes.io/projected/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-kube-api-access-nt4lv\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.167293 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.176996 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-config\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.177054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.177146 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4lv\" (UniqueName: \"kubernetes.io/projected/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-kube-api-access-nt4lv\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.179616 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.181707 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-config\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: 
I0312 16:22:57.210069 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4lv\" (UniqueName: \"kubernetes.io/projected/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-kube-api-access-nt4lv\") pod \"dnsmasq-dns-57d769cc4f-99lwk\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.371203 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.722698 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-24427"] Mar 12 16:22:57 crc kubenswrapper[4687]: W0312 16:22:57.730016 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052c84a6_4d8a_49bf_9afe_3586b8f55c5a.slice/crio-9848851d2640beab53e3cbf6a03217d611098011a6a282eaeefd1e96466cee63 WatchSource:0}: Error finding container 9848851d2640beab53e3cbf6a03217d611098011a6a282eaeefd1e96466cee63: Status 404 returned error can't find the container with id 9848851d2640beab53e3cbf6a03217d611098011a6a282eaeefd1e96466cee63 Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.908810 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.910540 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.919655 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.919834 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.919934 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.920054 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.920151 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jw674" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.928947 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.929131 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.953954 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:22:57 crc kubenswrapper[4687]: I0312 16:22:57.969234 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99lwk"] Mar 12 16:22:58 crc kubenswrapper[4687]: W0312 16:22:57.998519 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafcaaea3_4a38_4006_9cf6_b2a5f92122c2.slice/crio-5bd95ba8b814645d764a68fcfa1dd97cb71e720c2b76400f33f982b2f2db19ab WatchSource:0}: Error finding container 5bd95ba8b814645d764a68fcfa1dd97cb71e720c2b76400f33f982b2f2db19ab: Status 404 
returned error can't find the container with id 5bd95ba8b814645d764a68fcfa1dd97cb71e720c2b76400f33f982b2f2db19ab Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.030663 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.030773 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfpr\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-kube-api-access-pbfpr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.030823 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.030893 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.030921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.031016 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d83f3b6-81ea-457d-815e-22eab66d4058-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.031064 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.031100 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.031131 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.031152 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.031190 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d83f3b6-81ea-457d-815e-22eab66d4058-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135195 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d83f3b6-81ea-457d-815e-22eab66d4058-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135253 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135297 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbfpr\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-kube-api-access-pbfpr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135327 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135403 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2d83f3b6-81ea-457d-815e-22eab66d4058-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135494 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135515 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.135532 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.137317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.137657 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.138256 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.138594 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.138629 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aab5bd533d8baf2f941bc7008b1a1aa0f57238177d0ea423d0fb2976721576da/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.140937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.141603 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.142127 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.142588 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d83f3b6-81ea-457d-815e-22eab66d4058-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.148417 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d83f3b6-81ea-457d-815e-22eab66d4058-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.149098 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.151486 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.153081 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.157829 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.158337 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vlj65" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.159258 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.159681 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.159917 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.160301 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.161582 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.164625 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbfpr\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-kube-api-access-pbfpr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.166786 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.168236 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.184460 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.184652 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.223771 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.252590 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.254132 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.255035 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.275262 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.310185 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" event={"ID":"052c84a6-4d8a-49bf-9afe-3586b8f55c5a","Type":"ContainerStarted","Data":"9848851d2640beab53e3cbf6a03217d611098011a6a282eaeefd1e96466cee63"} Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.311422 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" event={"ID":"afcaaea3-4a38-4006-9cf6-b2a5f92122c2","Type":"ContainerStarted","Data":"5bd95ba8b814645d764a68fcfa1dd97cb71e720c2b76400f33f982b2f2db19ab"} Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338197 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338256 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxzqw\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-kube-api-access-hxzqw\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338284 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338432 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338455 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338481 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338584 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338627 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbr7\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-kube-api-access-4vbr7\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338668 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338704 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338720 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-config-data\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338738 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338773 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ee0a72c-d057-4042-8611-c509c4ed3edf-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338824 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338913 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ee0a72c-d057-4042-8611-c509c4ed3edf-pod-info\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.338936 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.444063 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445328 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ee0a72c-d057-4042-8611-c509c4ed3edf-pod-info\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445580 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445603 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bst5n\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-kube-api-access-bst5n\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445656 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445686 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445705 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxzqw\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-kube-api-access-hxzqw\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445727 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445750 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445808 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445839 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445859 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445877 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445898 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445915 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445932 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445954 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vbr7\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-kube-api-access-4vbr7\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445969 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-config-data\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.445991 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e395399-5a85-445f-9b56-4036687b73b6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4e395399-5a85-445f-9b56-4036687b73b6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-config-data\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446096 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446113 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446131 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446151 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-server-conf\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446168 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 
16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446188 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ee0a72c-d057-4042-8611-c509c4ed3edf-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.446206 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.460236 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.462667 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.462719 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46a24cb589ac69273944e258cc5d0edab0c26fdb6edbe87b89c93929adde0f2a/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.463574 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.464070 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.464293 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-config-data\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.464557 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.464618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.464849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.465217 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.465654 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.468307 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.469154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.470503 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-server-conf\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.474187 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ee0a72c-d057-4042-8611-c509c4ed3edf-pod-info\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.477271 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.478093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxzqw\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-kube-api-access-hxzqw\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.481525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4ee0a72c-d057-4042-8611-c509c4ed3edf-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.485201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.486495 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.502831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.505277 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbr7\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-kube-api-access-4vbr7\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.512386 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.532202 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.532636 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bdd724b6c4e3941fdc396afd4b3640dbe7d2e939f05c888d1f1536d2ca37cbe0/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bst5n\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-kube-api-access-bst5n\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548126 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548171 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548274 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548296 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548320 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-config-data\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548345 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e395399-5a85-445f-9b56-4036687b73b6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc 
kubenswrapper[4687]: I0312 16:22:58.548390 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e395399-5a85-445f-9b56-4036687b73b6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.548453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.549584 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.551960 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.552238 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.552610 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.553249 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.556494 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e395399-5a85-445f-9b56-4036687b73b6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.556522 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e395399-5a85-445f-9b56-4036687b73b6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.557571 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-config-data\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.558072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.559212 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.559244 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2124df05dbbe8a852954440b466035202297df9f813faaec4e3f5c591c134b4b/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.560847 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.576644 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bst5n\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-kube-api-access-bst5n\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.577525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.595605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " pod="openstack/rabbitmq-server-1" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.799070 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:22:58 crc kubenswrapper[4687]: W0312 16:22:58.827842 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d83f3b6_81ea_457d_815e_22eab66d4058.slice/crio-42c12be576425074aa5393de159ab517b138bb990d79929ad4aa09a1b4f5eb3a WatchSource:0}: Error finding container 42c12be576425074aa5393de159ab517b138bb990d79929ad4aa09a1b4f5eb3a: Status 404 returned error can't find 
the container with id 42c12be576425074aa5393de159ab517b138bb990d79929ad4aa09a1b4f5eb3a Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.842929 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 16:22:58 crc kubenswrapper[4687]: I0312 16:22:58.871287 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.073490 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.333182 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d83f3b6-81ea-457d-815e-22eab66d4058","Type":"ContainerStarted","Data":"42c12be576425074aa5393de159ab517b138bb990d79929ad4aa09a1b4f5eb3a"} Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.480569 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.482046 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.486214 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-w8xgp" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.526296 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.526526 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.526605 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.528579 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.526685 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.579198 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.579311 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jgn\" (UniqueName: \"kubernetes.io/projected/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-kube-api-access-k7jgn\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.579470 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.579519 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.583267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.583493 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.583526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.583834 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.686886 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.686938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.686974 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.687091 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.687219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jgn\" (UniqueName: 
\"kubernetes.io/projected/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-kube-api-access-k7jgn\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.687296 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.687325 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.687416 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.687581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.688971 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.693477 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.695051 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.703529 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.721692 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.724522 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24bf43d0d2f94384f2629b37217e299d8b8718932e545fff40a86e7f19e1d111/globalmount\"" pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.723573 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.723940 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jgn\" (UniqueName: \"kubernetes.io/projected/f2f2ec7e-fcd2-4749-9f00-ffe100081b84-kube-api-access-k7jgn\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.770222 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a309bd27-817b-42d4-a305-d2c45ce0aa4c\") pod \"openstack-galera-0\" (UID: \"f2f2ec7e-fcd2-4749-9f00-ffe100081b84\") " pod="openstack/openstack-galera-0" Mar 12 16:22:59 crc kubenswrapper[4687]: I0312 16:22:59.821586 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.376737 4687 scope.go:117] "RemoveContainer" containerID="6e8c67aeb86ec6a857badf690a65f0ea723851d6c36d019af73f80bbfc16ee1e" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.733669 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.736655 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.740043 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.740322 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.740567 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.740694 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dbqm6" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.754041 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.819985 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.820078 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a742c8fb-2af2-4192-bf5a-475f472b323a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.820101 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a742c8fb-2af2-4192-bf5a-475f472b323a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.820123 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbw7\" (UniqueName: \"kubernetes.io/projected/a742c8fb-2af2-4192-bf5a-475f472b323a-kube-api-access-pjbw7\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.820195 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a742c8fb-2af2-4192-bf5a-475f472b323a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.820229 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.820292 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.821574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.923718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.923829 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.924064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a742c8fb-2af2-4192-bf5a-475f472b323a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.924088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a742c8fb-2af2-4192-bf5a-475f472b323a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.924118 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbw7\" (UniqueName: \"kubernetes.io/projected/a742c8fb-2af2-4192-bf5a-475f472b323a-kube-api-access-pjbw7\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.924156 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a742c8fb-2af2-4192-bf5a-475f472b323a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.924203 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.924229 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.926075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.926983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.928427 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a742c8fb-2af2-4192-bf5a-475f472b323a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.933604 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.933828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a742c8fb-2af2-4192-bf5a-475f472b323a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.935433 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e911713e2fb87ac584c1cf50241d10e45783028b2a905cb22b33f0d3c015cb73/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.936215 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a742c8fb-2af2-4192-bf5a-475f472b323a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.947513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a742c8fb-2af2-4192-bf5a-475f472b323a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.952265 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbw7\" (UniqueName: 
\"kubernetes.io/projected/a742c8fb-2af2-4192-bf5a-475f472b323a-kube-api-access-pjbw7\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.993908 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87c17318-7109-43af-8e72-3e3ce45d2b48\") pod \"openstack-cell1-galera-0\" (UID: \"a742c8fb-2af2-4192-bf5a-475f472b323a\") " pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.997488 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 12 16:23:00 crc kubenswrapper[4687]: I0312 16:23:00.998918 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.005256 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.005502 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.005650 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-s7k2m" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.032440 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.061761 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.129605 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5aa64e0-72c4-4b44-8912-145bd488d369-config-data\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.129660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa64e0-72c4-4b44-8912-145bd488d369-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.129714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5aa64e0-72c4-4b44-8912-145bd488d369-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.129734 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5aa64e0-72c4-4b44-8912-145bd488d369-kolla-config\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.129755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxkp5\" (UniqueName: 
\"kubernetes.io/projected/e5aa64e0-72c4-4b44-8912-145bd488d369-kube-api-access-xxkp5\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.231687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5aa64e0-72c4-4b44-8912-145bd488d369-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.231732 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5aa64e0-72c4-4b44-8912-145bd488d369-kolla-config\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.231761 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxkp5\" (UniqueName: \"kubernetes.io/projected/e5aa64e0-72c4-4b44-8912-145bd488d369-kube-api-access-xxkp5\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.231884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5aa64e0-72c4-4b44-8912-145bd488d369-config-data\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.231921 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa64e0-72c4-4b44-8912-145bd488d369-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.233014 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5aa64e0-72c4-4b44-8912-145bd488d369-kolla-config\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.233157 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5aa64e0-72c4-4b44-8912-145bd488d369-config-data\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.238744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5aa64e0-72c4-4b44-8912-145bd488d369-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.247350 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5aa64e0-72c4-4b44-8912-145bd488d369-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.249131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxkp5\" (UniqueName: 
\"kubernetes.io/projected/e5aa64e0-72c4-4b44-8912-145bd488d369-kube-api-access-xxkp5\") pod \"memcached-0\" (UID: \"e5aa64e0-72c4-4b44-8912-145bd488d369\") " pod="openstack/memcached-0" Mar 12 16:23:01 crc kubenswrapper[4687]: I0312 16:23:01.341051 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.313964 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.316179 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.328942 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-znc64" Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.342520 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.377979 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnsn\" (UniqueName: \"kubernetes.io/projected/b458feda-ef86-41d9-a1bc-3091254b086c-kube-api-access-bbnsn\") pod \"kube-state-metrics-0\" (UID: \"b458feda-ef86-41d9-a1bc-3091254b086c\") " pod="openstack/kube-state-metrics-0" Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.483871 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnsn\" (UniqueName: \"kubernetes.io/projected/b458feda-ef86-41d9-a1bc-3091254b086c-kube-api-access-bbnsn\") pod \"kube-state-metrics-0\" (UID: \"b458feda-ef86-41d9-a1bc-3091254b086c\") " pod="openstack/kube-state-metrics-0" Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.528006 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnsn\" (UniqueName: \"kubernetes.io/projected/b458feda-ef86-41d9-a1bc-3091254b086c-kube-api-access-bbnsn\") pod \"kube-state-metrics-0\" (UID: \"b458feda-ef86-41d9-a1bc-3091254b086c\") " pod="openstack/kube-state-metrics-0" Mar 12 16:23:03 crc kubenswrapper[4687]: I0312 16:23:03.659473 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.279733 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz"] Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.280967 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.290669 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-wsghk" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.290990 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.302931 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz"] Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.406726 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqksg\" (UniqueName: \"kubernetes.io/projected/3a1437be-06a2-43c0-9ae4-6be8a822e466-kube-api-access-fqksg\") pod \"observability-ui-dashboards-66cbf594b5-mtdkz\" (UID: \"3a1437be-06a2-43c0-9ae4-6be8a822e466\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.407074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1437be-06a2-43c0-9ae4-6be8a822e466-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mtdkz\" (UID: \"3a1437be-06a2-43c0-9ae4-6be8a822e466\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.510489 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1437be-06a2-43c0-9ae4-6be8a822e466-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mtdkz\" (UID: \"3a1437be-06a2-43c0-9ae4-6be8a822e466\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.510645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqksg\" (UniqueName: \"kubernetes.io/projected/3a1437be-06a2-43c0-9ae4-6be8a822e466-kube-api-access-fqksg\") pod \"observability-ui-dashboards-66cbf594b5-mtdkz\" (UID: \"3a1437be-06a2-43c0-9ae4-6be8a822e466\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:04 crc kubenswrapper[4687]: E0312 16:23:04.510695 4687 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Mar 12 16:23:04 crc kubenswrapper[4687]: E0312 16:23:04.510783 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a1437be-06a2-43c0-9ae4-6be8a822e466-serving-cert podName:3a1437be-06a2-43c0-9ae4-6be8a822e466 nodeName:}" failed. No retries permitted until 2026-03-12 16:23:05.010757881 +0000 UTC m=+1233.974720225 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a1437be-06a2-43c0-9ae4-6be8a822e466-serving-cert") pod "observability-ui-dashboards-66cbf594b5-mtdkz" (UID: "3a1437be-06a2-43c0-9ae4-6be8a822e466") : secret "observability-ui-dashboards" not found Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.540393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqksg\" (UniqueName: \"kubernetes.io/projected/3a1437be-06a2-43c0-9ae4-6be8a822e466-kube-api-access-fqksg\") pod \"observability-ui-dashboards-66cbf594b5-mtdkz\" (UID: \"3a1437be-06a2-43c0-9ae4-6be8a822e466\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.628224 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c7b658b6f-wgtnl"] Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.633141 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.660418 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.663291 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.676013 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.676300 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.676598 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.677632 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.678952 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.679013 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-z5sdp" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.680805 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.693611 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.716691 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c7b658b6f-wgtnl"] Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.718883 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-oauth-serving-cert\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.718938 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.718960 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-config\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.718984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-oauth-config\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-service-ca\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719033 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-trusted-ca-bundle\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719072 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-config\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719104 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719151 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7pj\" (UniqueName: \"kubernetes.io/projected/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-kube-api-access-vb7pj\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8m65\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-kube-api-access-k8m65\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719252 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b24db29f-d8ed-49ff-8f32-612345208003-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719271 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-serving-cert\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719355 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.719404 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.754264 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.822105 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7pj\" (UniqueName: \"kubernetes.io/projected/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-kube-api-access-vb7pj\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823061 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8m65\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-kube-api-access-k8m65\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823127 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b24db29f-d8ed-49ff-8f32-612345208003-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823199 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-serving-cert\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823234 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823510 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-oauth-serving-cert\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823601 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-config\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823628 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823658 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-oauth-config\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823735 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-service-ca\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823794 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-trusted-ca-bundle\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-config\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.823868 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.834808 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 
16:23:04.836377 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.840432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-config\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.841972 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.843872 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-trusted-ca-bundle\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.845531 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-service-ca\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.845568 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-oauth-serving-cert\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.846204 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.862007 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.874126 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.874157 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-oauth-config\") pod \"console-6c7b658b6f-wgtnl\" (UID: 
\"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.874239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-config\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.874375 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.874899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b24db29f-d8ed-49ff-8f32-612345208003-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.875207 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-console-serving-cert\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.875565 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.893759 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0120afcda32ee9517420f0f0356e40661dba3e83466953a4986803f80a521621/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.878295 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7pj\" (UniqueName: \"kubernetes.io/projected/19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f-kube-api-access-vb7pj\") pod \"console-6c7b658b6f-wgtnl\" (UID: \"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f\") " pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.878607 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8m65\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-kube-api-access-k8m65\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.945581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.967492 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:04 crc kubenswrapper[4687]: I0312 16:23:04.998638 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.030451 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1437be-06a2-43c0-9ae4-6be8a822e466-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mtdkz\" (UID: \"3a1437be-06a2-43c0-9ae4-6be8a822e466\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.036306 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1437be-06a2-43c0-9ae4-6be8a822e466-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mtdkz\" (UID: \"3a1437be-06a2-43c0-9ae4-6be8a822e466\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.202112 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" Mar 12 16:23:05 crc kubenswrapper[4687]: W0312 16:23:05.700947 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ae78a70_0a58_4f39_977b_a4e5ce5b981d.slice/crio-d5bca58032342dd2bffd6685022680f009df6193fb433ca773b747c5316e7fba WatchSource:0}: Error finding container d5bca58032342dd2bffd6685022680f009df6193fb433ca773b747c5316e7fba: Status 404 returned error can't find the container with id d5bca58032342dd2bffd6685022680f009df6193fb433ca773b747c5316e7fba Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.784023 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9x5hb"] Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.790733 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.795546 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.795749 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8n8bx" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.795876 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.843850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c43d5a9-eafe-4910-acf5-0502509982b3-combined-ca-bundle\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.843924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-run-ovn\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.843999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8rc\" (UniqueName: \"kubernetes.io/projected/7c43d5a9-eafe-4910-acf5-0502509982b3-kube-api-access-zd8rc\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.844023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c43d5a9-eafe-4910-acf5-0502509982b3-scripts\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.844080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-log-ovn\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.844659 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovn-controller-9x5hb"] Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.844167 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-run\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.845090 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c43d5a9-eafe-4910-acf5-0502509982b3-ovn-controller-tls-certs\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.914459 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jwnk7"] Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.916780 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.924268 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jwnk7"] Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.946107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-run\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.946386 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-etc-ovs\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.946489 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c43d5a9-eafe-4910-acf5-0502509982b3-ovn-controller-tls-certs\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.946560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c43d5a9-eafe-4910-acf5-0502509982b3-combined-ca-bundle\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.947582 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-log\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.947791 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-run-ovn\") pod \"ovn-controller-9x5hb\" (UID: 
\"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.947939 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-lib\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.948036 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-run-ovn\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.947219 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-run\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.948381 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-run\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.948446 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8rc\" (UniqueName: \"kubernetes.io/projected/7c43d5a9-eafe-4910-acf5-0502509982b3-kube-api-access-zd8rc\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.948471 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c43d5a9-eafe-4910-acf5-0502509982b3-scripts\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.948596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-log-ovn\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.948618 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4k7\" (UniqueName: \"kubernetes.io/projected/24d69a73-06c7-48b5-9479-7816c969dafc-kube-api-access-vh4k7\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.948647 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24d69a73-06c7-48b5-9479-7816c969dafc-scripts\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.949726 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c43d5a9-eafe-4910-acf5-0502509982b3-var-log-ovn\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.950809 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c43d5a9-eafe-4910-acf5-0502509982b3-scripts\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.952464 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c43d5a9-eafe-4910-acf5-0502509982b3-ovn-controller-tls-certs\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.953582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c43d5a9-eafe-4910-acf5-0502509982b3-combined-ca-bundle\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:05 crc kubenswrapper[4687]: I0312 16:23:05.975973 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd8rc\" (UniqueName: \"kubernetes.io/projected/7c43d5a9-eafe-4910-acf5-0502509982b3-kube-api-access-zd8rc\") pod \"ovn-controller-9x5hb\" (UID: \"7c43d5a9-eafe-4910-acf5-0502509982b3\") " pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.051410 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-log\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.052846 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-lib\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.052894 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-run\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.053013 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4k7\" (UniqueName: \"kubernetes.io/projected/24d69a73-06c7-48b5-9479-7816c969dafc-kube-api-access-vh4k7\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.053067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24d69a73-06c7-48b5-9479-7816c969dafc-scripts\") pod \"ovn-controller-ovs-jwnk7\" (UID: 
\"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.053200 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-etc-ovs\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.053515 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-etc-ovs\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.051736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-log\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.053807 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-lib\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.053884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24d69a73-06c7-48b5-9479-7816c969dafc-var-run\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.057470 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24d69a73-06c7-48b5-9479-7816c969dafc-scripts\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.088214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4k7\" (UniqueName: \"kubernetes.io/projected/24d69a73-06c7-48b5-9479-7816c969dafc-kube-api-access-vh4k7\") pod \"ovn-controller-ovs-jwnk7\" (UID: \"24d69a73-06c7-48b5-9479-7816c969dafc\") " pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.200759 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.219179 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.263478 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.447831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ae78a70-0a58-4f39-977b-a4e5ce5b981d","Type":"ContainerStarted","Data":"d5bca58032342dd2bffd6685022680f009df6193fb433ca773b747c5316e7fba"} Mar 12 16:23:06 crc kubenswrapper[4687]: I0312 16:23:06.449584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"4ee0a72c-d057-4042-8611-c509c4ed3edf","Type":"ContainerStarted","Data":"9c88622f4a366698fe8db794f3a9b5e12666021208a468d8950f4432351f7e4e"} Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.366720 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.374569 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.377923 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.410553 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.410669 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.410862 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.411028 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.414823 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-ch7cf" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554066 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554115 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89e3ebea-b275-4366-87da-918af6ae6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89e3ebea-b275-4366-87da-918af6ae6e38\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554167 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-config\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554223 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554305 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554658 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vl5\" (UniqueName: \"kubernetes.io/projected/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-kube-api-access-j6vl5\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.554840 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.559649 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.564231 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.567410 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.567593 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.567704 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.569052 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9pdjq" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.581317 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657065 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657247 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6vl5\" (UniqueName: \"kubernetes.io/projected/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-kube-api-access-j6vl5\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657272 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657332 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657420 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657461 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89e3ebea-b275-4366-87da-918af6ae6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89e3ebea-b275-4366-87da-918af6ae6e38\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 
16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.657488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-config\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.658197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-config\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.658573 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.662408 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.662934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.663660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.665672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.667659 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.667698 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89e3ebea-b275-4366-87da-918af6ae6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89e3ebea-b275-4366-87da-918af6ae6e38\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a47e14e4e446a24b1d757b6267f47b56a31de48c6ae72e9f8428115d81fe4be4/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.672980 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6vl5\" (UniqueName: \"kubernetes.io/projected/319f1aa4-c9b0-4424-aba2-3f8fa4c36257-kube-api-access-j6vl5\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.705920 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89e3ebea-b275-4366-87da-918af6ae6e38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89e3ebea-b275-4366-87da-918af6ae6e38\") pod \"ovsdbserver-nb-0\" (UID: \"319f1aa4-c9b0-4424-aba2-3f8fa4c36257\") " pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.729938 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.758851 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85z26\" (UniqueName: \"kubernetes.io/projected/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-kube-api-access-85z26\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.759519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11189dbd-0253-438a-8ca1-3f991bf00942\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11189dbd-0253-438a-8ca1-3f991bf00942\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.759664 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.759814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.760004 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 
16:23:10.760156 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.760272 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.760427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.864426 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.864770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85z26\" (UniqueName: \"kubernetes.io/projected/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-kube-api-access-85z26\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.864928 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11189dbd-0253-438a-8ca1-3f991bf00942\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11189dbd-0253-438a-8ca1-3f991bf00942\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.865406 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.865474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.865497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.865577 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.865626 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.866065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.866296 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.866304 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.866764 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.866800 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11189dbd-0253-438a-8ca1-3f991bf00942\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11189dbd-0253-438a-8ca1-3f991bf00942\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e5e4f83d5ce134aaa17b22013b817c30559a59df1f19855eac8853db27cf7be6/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.869794 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.869936 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.870423 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.882948 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85z26\" (UniqueName: \"kubernetes.io/projected/4a92d9c0-0e63-4a78-bd77-a79cbc20449b-kube-api-access-85z26\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:10 crc kubenswrapper[4687]: I0312 16:23:10.900020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11189dbd-0253-438a-8ca1-3f991bf00942\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11189dbd-0253-438a-8ca1-3f991bf00942\") pod \"ovsdbserver-sb-0\" (UID: \"4a92d9c0-0e63-4a78-bd77-a79cbc20449b\") " pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:11 crc kubenswrapper[4687]: I0312 16:23:11.190344 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:16 crc kubenswrapper[4687]: I0312 16:23:16.163353 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 16:23:16 crc kubenswrapper[4687]: I0312 16:23:16.323971 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.549934 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.550254 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnjtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-24427_openstack(052c84a6-4d8a-49bf-9afe-3586b8f55c5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.552588 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" Mar 12 16:23:16 crc kubenswrapper[4687]: I0312 16:23:16.570239 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"4e395399-5a85-445f-9b56-4036687b73b6","Type":"ContainerStarted","Data":"d7869bc2279b98a0de8bb2b6acfce73b67a26fc7645d4a93ef4c1a0e669db224"} Mar 12 16:23:16 crc kubenswrapper[4687]: W0312 16:23:16.591890 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda742c8fb_2af2_4192_bf5a_475f472b323a.slice/crio-e0aaa92bfa966610d2aa8980424294b50c6b75a37aeeff2d869c8b9283ff64dc WatchSource:0}: Error finding container e0aaa92bfa966610d2aa8980424294b50c6b75a37aeeff2d869c8b9283ff64dc: Status 404 returned error can't find the container with id e0aaa92bfa966610d2aa8980424294b50c6b75a37aeeff2d869c8b9283ff64dc Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.601934 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.602485 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt4lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-99lwk_openstack(afcaaea3-4a38-4006-9cf6-b2a5f92122c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.603702 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.605055 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.605252 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmmb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pdnp7_openstack(bac824fc-109e-4d83-8124-9bd315b2bbac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.606549 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" podUID="bac824fc-109e-4d83-8124-9bd315b2bbac" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.630285 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.630540 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqgl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-gkhhd_openstack(05bc5887-8df5-4969-9874-1a2ac5c761a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:23:16 crc kubenswrapper[4687]: E0312 16:23:16.631926 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" podUID="05bc5887-8df5-4969-9874-1a2ac5c761a0" Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 16:23:17.319612 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 16:23:17.332672 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c7b658b6f-wgtnl"] Mar 12 16:23:17 crc kubenswrapper[4687]: W0312 16:23:17.336592 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b5ecc0_df37_4bf5_b4a4_3bdca5826d2f.slice/crio-1a8e6bf4b081d69df656e13131d3d161b86a17789298c0ff052c52fb6837a586 WatchSource:0}: Error finding container 1a8e6bf4b081d69df656e13131d3d161b86a17789298c0ff052c52fb6837a586: Status 404 returned error can't find the container with id 1a8e6bf4b081d69df656e13131d3d161b86a17789298c0ff052c52fb6837a586 Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 
16:23:17.586696 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a742c8fb-2af2-4192-bf5a-475f472b323a","Type":"ContainerStarted","Data":"e0aaa92bfa966610d2aa8980424294b50c6b75a37aeeff2d869c8b9283ff64dc"} Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 16:23:17.588492 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b458feda-ef86-41d9-a1bc-3091254b086c","Type":"ContainerStarted","Data":"71a2c20eeefadda1ab3cf20a89a397260cab32c3cc9d70a2b4a27ed33fc76c47"} Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 16:23:17.589560 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e5aa64e0-72c4-4b44-8912-145bd488d369","Type":"ContainerStarted","Data":"82c0e43042180f0adace8d99f6bd0fe8774c306b0066f351ecc8e2ba1d9bd764"} Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 16:23:17.593615 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c7b658b6f-wgtnl" event={"ID":"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f","Type":"ContainerStarted","Data":"5cd606511b68f2e1e6cf249862a9f03a5baf74b6810394414fc5431f837a1c6c"} Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 16:23:17.593642 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c7b658b6f-wgtnl" event={"ID":"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f","Type":"ContainerStarted","Data":"1a8e6bf4b081d69df656e13131d3d161b86a17789298c0ff052c52fb6837a586"} Mar 12 16:23:17 crc kubenswrapper[4687]: E0312 16:23:17.594988 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" Mar 12 16:23:17 crc kubenswrapper[4687]: E0312 16:23:17.595164 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" Mar 12 16:23:17 crc kubenswrapper[4687]: I0312 16:23:17.679561 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c7b658b6f-wgtnl" podStartSLOduration=13.679537823 podStartE2EDuration="13.679537823s" podCreationTimestamp="2026-03-12 16:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:17.668195441 +0000 UTC m=+1246.632157785" watchObservedRunningTime="2026-03-12 16:23:17.679537823 +0000 UTC m=+1246.643500167" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.011040 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz"] Mar 12 16:23:18 crc kubenswrapper[4687]: W0312 16:23:18.033485 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a1437be_06a2_43c0_9ae4_6be8a822e466.slice/crio-4e787ae3f492e646ddbeac85905ef8833f3e87cf1e29dec7fc75e4d524fa368f WatchSource:0}: Error finding container 4e787ae3f492e646ddbeac85905ef8833f3e87cf1e29dec7fc75e4d524fa368f: Status 404 returned error can't find the container with id 
4e787ae3f492e646ddbeac85905ef8833f3e87cf1e29dec7fc75e4d524fa368f Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.037609 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9x5hb"] Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.045015 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:23:18 crc kubenswrapper[4687]: W0312 16:23:18.058514 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb24db29f_d8ed_49ff_8f32_612345208003.slice/crio-cf8191c1c5560e51450690ffc258d35d50958154bf50c049fa9ca0044ef438f7 WatchSource:0}: Error finding container cf8191c1c5560e51450690ffc258d35d50958154bf50c049fa9ca0044ef438f7: Status 404 returned error can't find the container with id cf8191c1c5560e51450690ffc258d35d50958154bf50c049fa9ca0044ef438f7 Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.233776 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.370398 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.407460 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.414251 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.543534 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmmb8\" (UniqueName: \"kubernetes.io/projected/bac824fc-109e-4d83-8124-9bd315b2bbac-kube-api-access-dmmb8\") pod \"bac824fc-109e-4d83-8124-9bd315b2bbac\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.543659 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-dns-svc\") pod \"05bc5887-8df5-4969-9874-1a2ac5c761a0\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.543686 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqgl5\" (UniqueName: \"kubernetes.io/projected/05bc5887-8df5-4969-9874-1a2ac5c761a0-kube-api-access-mqgl5\") pod \"05bc5887-8df5-4969-9874-1a2ac5c761a0\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.543909 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-config\") pod \"05bc5887-8df5-4969-9874-1a2ac5c761a0\" (UID: \"05bc5887-8df5-4969-9874-1a2ac5c761a0\") " Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.544097 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac824fc-109e-4d83-8124-9bd315b2bbac-config\") pod \"bac824fc-109e-4d83-8124-9bd315b2bbac\" (UID: \"bac824fc-109e-4d83-8124-9bd315b2bbac\") " Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.544331 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05bc5887-8df5-4969-9874-1a2ac5c761a0" (UID: "05bc5887-8df5-4969-9874-1a2ac5c761a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.544383 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-config" (OuterVolumeSpecName: "config") pod "05bc5887-8df5-4969-9874-1a2ac5c761a0" (UID: "05bc5887-8df5-4969-9874-1a2ac5c761a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.544796 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.544839 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05bc5887-8df5-4969-9874-1a2ac5c761a0-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.544893 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac824fc-109e-4d83-8124-9bd315b2bbac-config" (OuterVolumeSpecName: "config") pod "bac824fc-109e-4d83-8124-9bd315b2bbac" (UID: "bac824fc-109e-4d83-8124-9bd315b2bbac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.553605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac824fc-109e-4d83-8124-9bd315b2bbac-kube-api-access-dmmb8" (OuterVolumeSpecName: "kube-api-access-dmmb8") pod "bac824fc-109e-4d83-8124-9bd315b2bbac" (UID: "bac824fc-109e-4d83-8124-9bd315b2bbac"). InnerVolumeSpecName "kube-api-access-dmmb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.554871 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bc5887-8df5-4969-9874-1a2ac5c761a0-kube-api-access-mqgl5" (OuterVolumeSpecName: "kube-api-access-mqgl5") pod "05bc5887-8df5-4969-9874-1a2ac5c761a0" (UID: "05bc5887-8df5-4969-9874-1a2ac5c761a0"). InnerVolumeSpecName "kube-api-access-mqgl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.617713 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d83f3b6-81ea-457d-815e-22eab66d4058","Type":"ContainerStarted","Data":"cbb7ab54062a9c089c1ea7df44f36b511eb1f51b9b3ba64674dd3f4420154282"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.621341 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2f2ec7e-fcd2-4749-9f00-ffe100081b84","Type":"ContainerStarted","Data":"98329df760d1d312bd74966048acddcec1f364a28ffb8c7913954fe0072f316d"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.623302 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerStarted","Data":"cf8191c1c5560e51450690ffc258d35d50958154bf50c049fa9ca0044ef438f7"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.625085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9x5hb" event={"ID":"7c43d5a9-eafe-4910-acf5-0502509982b3","Type":"ContainerStarted","Data":"9857a556e1dd110152583a8a342cd014587e6c4927ba9f9f9f578e6b05092fb5"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.626395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" event={"ID":"05bc5887-8df5-4969-9874-1a2ac5c761a0","Type":"ContainerDied","Data":"95fe240090525dee9126c62d2194a24984481e7cc0723933b382962014746f99"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.626508 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gkhhd" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.632648 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"4ee0a72c-d057-4042-8611-c509c4ed3edf","Type":"ContainerStarted","Data":"be93c8a792c8bdf44ed1a5742f64d7c5582812aaca1bff03b025d337a8592e0f"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.636283 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4e395399-5a85-445f-9b56-4036687b73b6","Type":"ContainerStarted","Data":"a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.639996 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.639932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pdnp7" event={"ID":"bac824fc-109e-4d83-8124-9bd315b2bbac","Type":"ContainerDied","Data":"c025e2c6d630f6ddb3cb66f3413ce7b9df3afc3f8655fdb448fc482412cd1917"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.652789 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac824fc-109e-4d83-8124-9bd315b2bbac-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.652832 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmmb8\" (UniqueName: \"kubernetes.io/projected/bac824fc-109e-4d83-8124-9bd315b2bbac-kube-api-access-dmmb8\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.652848 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqgl5\" (UniqueName: \"kubernetes.io/projected/05bc5887-8df5-4969-9874-1a2ac5c761a0-kube-api-access-mqgl5\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.654737 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" event={"ID":"3a1437be-06a2-43c0-9ae4-6be8a822e466","Type":"ContainerStarted","Data":"4e787ae3f492e646ddbeac85905ef8833f3e87cf1e29dec7fc75e4d524fa368f"} Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.745816 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gkhhd"] Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.767967 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gkhhd"] Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.796484 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdnp7"] Mar 12 16:23:18 crc kubenswrapper[4687]: I0312 16:23:18.805684 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pdnp7"] Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.241987 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.431193 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jwnk7"] Mar 12 16:23:19 crc kubenswrapper[4687]: W0312 16:23:19.451878 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d69a73_06c7_48b5_9479_7816c969dafc.slice/crio-79ffe964d9d1e54ca4597fec8d2142b88e631c49bf32f3651d906048756439db WatchSource:0}: Error finding container 79ffe964d9d1e54ca4597fec8d2142b88e631c49bf32f3651d906048756439db: Status 404 returned error can't find the container with id 79ffe964d9d1e54ca4597fec8d2142b88e631c49bf32f3651d906048756439db Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.668969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92d9c0-0e63-4a78-bd77-a79cbc20449b","Type":"ContainerStarted","Data":"d84a0bf60a606af9f4c0694b023875d49c883c2d891ef2d4c181be9f505a3fac"} Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.670819 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"319f1aa4-c9b0-4424-aba2-3f8fa4c36257","Type":"ContainerStarted","Data":"20f3d4e3e15ab9ff815b7f2413a04a21de17794496455ebd19509f61387d1e13"} Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.677230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jwnk7" event={"ID":"24d69a73-06c7-48b5-9479-7816c969dafc","Type":"ContainerStarted","Data":"79ffe964d9d1e54ca4597fec8d2142b88e631c49bf32f3651d906048756439db"} Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.679261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ae78a70-0a58-4f39-977b-a4e5ce5b981d","Type":"ContainerStarted","Data":"539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac"} Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.746453 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05bc5887-8df5-4969-9874-1a2ac5c761a0" path="/var/lib/kubelet/pods/05bc5887-8df5-4969-9874-1a2ac5c761a0/volumes" Mar 12 16:23:19 crc kubenswrapper[4687]: I0312 16:23:19.746819 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac824fc-109e-4d83-8124-9bd315b2bbac" path="/var/lib/kubelet/pods/bac824fc-109e-4d83-8124-9bd315b2bbac/volumes" Mar 12 16:23:24 crc kubenswrapper[4687]: I0312 16:23:24.968462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:24 crc kubenswrapper[4687]: I0312 16:23:24.968921 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:24 crc kubenswrapper[4687]: I0312 16:23:24.974480 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:25 crc kubenswrapper[4687]: I0312 16:23:25.746455 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 12 16:23:25 crc kubenswrapper[4687]: I0312 16:23:25.746738 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e5aa64e0-72c4-4b44-8912-145bd488d369","Type":"ContainerStarted","Data":"4e4843f36b56b75f07bd569aaba2dcd1bc2c854a95f1870fe7c446a57466492e"} Mar 12 16:23:25 crc kubenswrapper[4687]: I0312 16:23:25.746788 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 16:23:25 crc kubenswrapper[4687]: I0312 16:23:25.754461 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.884519436 podStartE2EDuration="25.754440118s" podCreationTimestamp="2026-03-12 16:23:00 +0000 UTC" firstStartedPulling="2026-03-12 16:23:16.592409955 +0000 UTC m=+1245.556372319" lastFinishedPulling="2026-03-12 16:23:19.462330657 +0000 UTC m=+1248.426293001" observedRunningTime="2026-03-12 16:23:25.754040087 +0000 UTC m=+1254.718002441" watchObservedRunningTime="2026-03-12 16:23:25.754440118 +0000 UTC m=+1254.718402482" Mar 12 16:23:25 crc kubenswrapper[4687]: I0312 16:23:25.840325 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6775b889d8-xvw66"] Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.744149 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b458feda-ef86-41d9-a1bc-3091254b086c","Type":"ContainerStarted","Data":"349a96d569511cce7e1aa8aebed25455e3777ae8ed981fc3b610167c0f18d1e7"} 
Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.744576 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.747399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9x5hb" event={"ID":"7c43d5a9-eafe-4910-acf5-0502509982b3","Type":"ContainerStarted","Data":"3877c56b6d1cffb0b2b81f7d07fa1ea2c054dad179388808b89213dc12f97e23"} Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.747641 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9x5hb" Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.748894 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92d9c0-0e63-4a78-bd77-a79cbc20449b","Type":"ContainerStarted","Data":"97f84d46a822621f3d2e2aed704079239cd3a8f9fc9f8d09600ea8ab161066ec"} Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.750144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"319f1aa4-c9b0-4424-aba2-3f8fa4c36257","Type":"ContainerStarted","Data":"70ee94f153522ab62ab9d730b8009f3118239b99b458fe9dc905e5ef5ac83b53"} Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.751813 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" event={"ID":"3a1437be-06a2-43c0-9ae4-6be8a822e466","Type":"ContainerStarted","Data":"fcd419b23ae2ab8b88ab81beeeacf44294063df182292c98cee7e0da99401e13"} Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.754133 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a742c8fb-2af2-4192-bf5a-475f472b323a","Type":"ContainerStarted","Data":"ea67ce247125b477c7353354c53298ff68ed8cf1b05dc0bf39a94693d8daf66e"} Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.759207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2f2ec7e-fcd2-4749-9f00-ffe100081b84","Type":"ContainerStarted","Data":"4251d596ba5c84187ccfa70afc2b74827c05fdd40531515ae856c53a122fd3b4"} Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.764196 4687 generic.go:334] "Generic (PLEG): container finished" podID="24d69a73-06c7-48b5-9479-7816c969dafc" containerID="bb980c0e9ec43b5433f619725a2483e5c8dc2c184e9dae7942f9a124ad505620" exitCode=0 Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.765880 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jwnk7" event={"ID":"24d69a73-06c7-48b5-9479-7816c969dafc","Type":"ContainerDied","Data":"bb980c0e9ec43b5433f619725a2483e5c8dc2c184e9dae7942f9a124ad505620"} Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.772770 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.268900635 podStartE2EDuration="23.772751064s" podCreationTimestamp="2026-03-12 16:23:03 +0000 UTC" firstStartedPulling="2026-03-12 16:23:17.325598492 +0000 UTC m=+1246.289560836" lastFinishedPulling="2026-03-12 16:23:25.829448921 +0000 UTC m=+1254.793411265" observedRunningTime="2026-03-12 16:23:26.762407611 +0000 UTC m=+1255.726370025" watchObservedRunningTime="2026-03-12 16:23:26.772751064 +0000 UTC m=+1255.736713408" Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.810539 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-9x5hb" podStartSLOduration=14.760550116 podStartE2EDuration="21.810525603s" podCreationTimestamp="2026-03-12 16:23:05 +0000 UTC" firstStartedPulling="2026-03-12 16:23:18.071191391 +0000 UTC m=+1247.035153735" lastFinishedPulling="2026-03-12 16:23:25.121166878 +0000 UTC m=+1254.085129222" observedRunningTime="2026-03-12 16:23:26.804457837 +0000 UTC m=+1255.768420181" watchObservedRunningTime="2026-03-12 16:23:26.810525603 +0000 UTC m=+1255.774487937" Mar 12 16:23:26 crc kubenswrapper[4687]: I0312 16:23:26.827419 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mtdkz" podStartSLOduration=15.23252845 podStartE2EDuration="22.827395507s" podCreationTimestamp="2026-03-12 16:23:04 +0000 UTC" firstStartedPulling="2026-03-12 16:23:18.036189609 +0000 UTC m=+1247.000151953" lastFinishedPulling="2026-03-12 16:23:25.631056666 +0000 UTC m=+1254.595019010" observedRunningTime="2026-03-12 16:23:26.816556209 +0000 UTC m=+1255.780518553" watchObservedRunningTime="2026-03-12 16:23:26.827395507 +0000 UTC m=+1255.791357851" Mar 12 16:23:27 crc kubenswrapper[4687]: I0312 16:23:27.776130 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jwnk7" event={"ID":"24d69a73-06c7-48b5-9479-7816c969dafc","Type":"ContainerStarted","Data":"9d2e0e42488fec41c283e8f3ecb8b62fc25074df688483b8d945da05d6c002b7"} Mar 12 16:23:27 crc kubenswrapper[4687]: I0312 16:23:27.776442 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jwnk7" event={"ID":"24d69a73-06c7-48b5-9479-7816c969dafc","Type":"ContainerStarted","Data":"020a5ba153b9e0ec16393b4025cacc1519d0f554a4ef7210a63e757a66ca715d"} Mar 12 16:23:27 crc kubenswrapper[4687]: I0312 16:23:27.777437 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:27 crc kubenswrapper[4687]: I0312 16:23:27.777496 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:23:27 crc kubenswrapper[4687]: I0312 16:23:27.801844 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jwnk7" podStartSLOduration=17.132081907 podStartE2EDuration="22.801825547s" podCreationTimestamp="2026-03-12 16:23:05 +0000 UTC" firstStartedPulling="2026-03-12 16:23:19.454207334 +0000 UTC m=+1248.418169678" lastFinishedPulling="2026-03-12 16:23:25.123950974 +0000 UTC m=+1254.087913318" observedRunningTime="2026-03-12 16:23:27.792636615 +0000 UTC m=+1256.756598959" watchObservedRunningTime="2026-03-12 16:23:27.801825547 +0000 UTC m=+1256.765787891" Mar 12 16:23:28 crc kubenswrapper[4687]: I0312 16:23:28.793996 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerStarted","Data":"07482ee2404f472217616b3bb74caeb560f705a9a937e22de589a6e9d3cd4c4a"} Mar 12 16:23:30 crc kubenswrapper[4687]: I0312 16:23:30.831213 4687 generic.go:334] "Generic (PLEG): container finished" podID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerID="ea67ce247125b477c7353354c53298ff68ed8cf1b05dc0bf39a94693d8daf66e" exitCode=0 Mar 12 16:23:30 crc kubenswrapper[4687]: I0312 16:23:30.831459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"a742c8fb-2af2-4192-bf5a-475f472b323a","Type":"ContainerDied","Data":"ea67ce247125b477c7353354c53298ff68ed8cf1b05dc0bf39a94693d8daf66e"} Mar 12 16:23:30 crc kubenswrapper[4687]: I0312 16:23:30.834050 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerID="4251d596ba5c84187ccfa70afc2b74827c05fdd40531515ae856c53a122fd3b4" exitCode=0 Mar 12 16:23:30 crc kubenswrapper[4687]: I0312 16:23:30.834134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2f2ec7e-fcd2-4749-9f00-ffe100081b84","Type":"ContainerDied","Data":"4251d596ba5c84187ccfa70afc2b74827c05fdd40531515ae856c53a122fd3b4"} Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.342840 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.844990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92d9c0-0e63-4a78-bd77-a79cbc20449b","Type":"ContainerStarted","Data":"5609c190c3d58ecd4dc314f87ad4cddeedc160010cbf8833d28e587c64347853"} Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.847806 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"319f1aa4-c9b0-4424-aba2-3f8fa4c36257","Type":"ContainerStarted","Data":"c4b56d46b0c055f0f0b02be6096b86baa7c588a5c57574efb4cfdcb21fb87fc1"} Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.852023 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a742c8fb-2af2-4192-bf5a-475f472b323a","Type":"ContainerStarted","Data":"8ecd462191420eb461ff19f9a631dcae15360fcf3606ec918dab0aaa3792744e"} Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.853934 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2f2ec7e-fcd2-4749-9f00-ffe100081b84","Type":"ContainerStarted","Data":"4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338"} Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.855450 4687 generic.go:334] "Generic (PLEG): container finished" podID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerID="6e61c99d0eaa6063a2f61700e57430a9133ee51c15171c6532b8b173da0c103c" exitCode=0 Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.855486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" event={"ID":"052c84a6-4d8a-49bf-9afe-3586b8f55c5a","Type":"ContainerDied","Data":"6e61c99d0eaa6063a2f61700e57430a9133ee51c15171c6532b8b173da0c103c"} Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.873749 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.326533228 podStartE2EDuration="22.873730568s" podCreationTimestamp="2026-03-12 16:23:09 +0000 UTC" firstStartedPulling="2026-03-12 16:23:19.344809256 +0000 UTC m=+1248.308771620" lastFinishedPulling="2026-03-12 16:23:30.892006616 +0000 UTC m=+1259.855968960" observedRunningTime="2026-03-12 16:23:31.864814152 +0000 UTC m=+1260.828776506" watchObservedRunningTime="2026-03-12 16:23:31.873730568 +0000 UTC m=+1260.837692912" Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.900052 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.807023663 podStartE2EDuration="33.90003309s" podCreationTimestamp="2026-03-12 
16:22:58 +0000 UTC" firstStartedPulling="2026-03-12 16:23:18.248071124 +0000 UTC m=+1247.212033468" lastFinishedPulling="2026-03-12 16:23:24.341080521 +0000 UTC m=+1253.305042895" observedRunningTime="2026-03-12 16:23:31.897226563 +0000 UTC m=+1260.861188927" watchObservedRunningTime="2026-03-12 16:23:31.90003309 +0000 UTC m=+1260.863995434" Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.924433 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.398045194 podStartE2EDuration="32.924412781s" podCreationTimestamp="2026-03-12 16:22:59 +0000 UTC" firstStartedPulling="2026-03-12 16:23:16.594906243 +0000 UTC m=+1245.558868587" lastFinishedPulling="2026-03-12 16:23:25.12127383 +0000 UTC m=+1254.085236174" observedRunningTime="2026-03-12 16:23:31.916968456 +0000 UTC m=+1260.880930800" watchObservedRunningTime="2026-03-12 16:23:31.924412781 +0000 UTC m=+1260.888375125" Mar 12 16:23:31 crc kubenswrapper[4687]: I0312 16:23:31.944168 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.356410708 podStartE2EDuration="22.944146593s" podCreationTimestamp="2026-03-12 16:23:09 +0000 UTC" firstStartedPulling="2026-03-12 16:23:19.31912712 +0000 UTC m=+1248.283089464" lastFinishedPulling="2026-03-12 16:23:30.906863005 +0000 UTC m=+1259.870825349" observedRunningTime="2026-03-12 16:23:31.93495518 +0000 UTC m=+1260.898917534" watchObservedRunningTime="2026-03-12 16:23:31.944146593 +0000 UTC m=+1260.908108937" Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.190926 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.225138 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.871266 4687 generic.go:334] "Generic (PLEG): container finished" podID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerID="cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f" exitCode=0 Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.871353 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" event={"ID":"afcaaea3-4a38-4006-9cf6-b2a5f92122c2","Type":"ContainerDied","Data":"cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f"} Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.874336 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" event={"ID":"052c84a6-4d8a-49bf-9afe-3586b8f55c5a","Type":"ContainerStarted","Data":"f737336255690d3c2d2c7f384eca8640b4dfd0e288834eae9ff5ed0ce25083d3"} Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.874751 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.932685 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" podStartSLOduration=3.773761586 podStartE2EDuration="36.932667621s" podCreationTimestamp="2026-03-12 16:22:56 +0000 UTC" firstStartedPulling="2026-03-12 16:22:57.73304971 +0000 UTC m=+1226.697012054" lastFinishedPulling="2026-03-12 16:23:30.891955735 +0000 UTC m=+1259.855918089" observedRunningTime="2026-03-12 16:23:32.925114944 +0000 UTC m=+1261.889077288" watchObservedRunningTime="2026-03-12 
16:23:32.932667621 +0000 UTC m=+1261.896629965" Mar 12 16:23:32 crc kubenswrapper[4687]: I0312 16:23:32.943845 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.190872 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99lwk"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.220794 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ct4m2"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.222678 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.224189 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.235829 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ct4m2"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.272209 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.272306 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.272511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xx7n\" (UniqueName: \"kubernetes.io/projected/ce59e430-0056-4fc3-88a0-333516c500a4-kube-api-access-5xx7n\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.272555 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-config\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.279540 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-42bmb"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.280926 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.284657 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.286233 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-42bmb"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.374753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.374862 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-ovn-rundir\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.374891 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-ovs-rundir\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.374926 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjnm\" (UniqueName: \"kubernetes.io/projected/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-kube-api-access-qwjnm\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.374991 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xx7n\" (UniqueName: \"kubernetes.io/projected/ce59e430-0056-4fc3-88a0-333516c500a4-kube-api-access-5xx7n\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.375025 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-config\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.375068 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.375104 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-config\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " 
pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.375176 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-combined-ca-bundle\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.375223 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.375730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.376545 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.376715 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-config\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.393799 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xx7n\" (UniqueName: \"kubernetes.io/projected/ce59e430-0056-4fc3-88a0-333516c500a4-kube-api-access-5xx7n\") pod \"dnsmasq-dns-7f896c8c65-ct4m2\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.479046 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.479103 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-config\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.479157 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-combined-ca-bundle\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc 
kubenswrapper[4687]: I0312 16:23:33.479282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-ovn-rundir\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.479304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwjnm\" (UniqueName: \"kubernetes.io/projected/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-kube-api-access-qwjnm\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.479320 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-ovs-rundir\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.479683 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-ovs-rundir\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.480282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-ovn-rundir\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.481112 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-config\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.483104 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.483479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-combined-ca-bundle\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.505595 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwjnm\" (UniqueName: \"kubernetes.io/projected/12396dd7-4cb8-4e1d-87d5-e03d09c6a01e-kube-api-access-qwjnm\") pod \"ovn-controller-metrics-42bmb\" (UID: \"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e\") " pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.537742 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.598318 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-42bmb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.684880 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-24427"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.689970 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.748838 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4gdg"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.750446 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.753200 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.764930 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4gdg"] Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.785494 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.785560 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjs67\" (UniqueName: \"kubernetes.io/projected/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-kube-api-access-gjs67\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.785672 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-dns-svc\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.785825 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-config\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.785905 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.890132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-config\") pod 
\"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.890483 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.890529 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.890554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjs67\" (UniqueName: \"kubernetes.io/projected/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-kube-api-access-gjs67\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.890619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-dns-svc\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.891662 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-dns-svc\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.891732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.892197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.892300 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-config\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.895900 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerName="dnsmasq-dns" containerID="cri-o://f737336255690d3c2d2c7f384eca8640b4dfd0e288834eae9ff5ed0ce25083d3" gracePeriod=10 Mar 12 16:23:33 
crc kubenswrapper[4687]: I0312 16:23:33.896496 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:23:33 crc kubenswrapper[4687]: I0312 16:23:33.918813 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjs67\" (UniqueName: \"kubernetes.io/projected/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-kube-api-access-gjs67\") pod \"dnsmasq-dns-698758b865-x4gdg\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.077923 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.117373 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ct4m2"] Mar 12 16:23:34 crc kubenswrapper[4687]: W0312 16:23:34.128042 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce59e430_0056_4fc3_88a0_333516c500a4.slice/crio-d0ee580e84ffd81c823c586af65eea887f496d903dd9ddcf4460c908abf56253 WatchSource:0}: Error finding container d0ee580e84ffd81c823c586af65eea887f496d903dd9ddcf4460c908abf56253: Status 404 returned error can't find the container with id d0ee580e84ffd81c823c586af65eea887f496d903dd9ddcf4460c908abf56253 Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.390017 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-42bmb"] Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.655631 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4gdg"] Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.730637 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.793122 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.877919 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.884212 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.887133 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.887442 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.887757 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pvm4v" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.887944 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.908709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4gdg" event={"ID":"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff","Type":"ContainerStarted","Data":"77f28074beb1d8f1836e9ec6616b4de08eb8a61025239bd32dfbbcd32f3248ea"} Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.910522 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-42bmb" event={"ID":"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e","Type":"ContainerStarted","Data":"7f5a898d572108bad5506807e05868da4802c110c901c915f42d1cb408c1165d"} Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.914168 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" event={"ID":"ce59e430-0056-4fc3-88a0-333516c500a4","Type":"ContainerStarted","Data":"d0ee580e84ffd81c823c586af65eea887f496d903dd9ddcf4460c908abf56253"} Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.930192 4687 generic.go:334] "Generic (PLEG): container finished" podID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerID="f737336255690d3c2d2c7f384eca8640b4dfd0e288834eae9ff5ed0ce25083d3" exitCode=0 Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.930277 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" event={"ID":"052c84a6-4d8a-49bf-9afe-3586b8f55c5a","Type":"ContainerDied","Data":"f737336255690d3c2d2c7f384eca8640b4dfd0e288834eae9ff5ed0ce25083d3"} Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.930510 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.932100 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 16:23:34 crc kubenswrapper[4687]: I0312 16:23:34.980839 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.016743 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97466c9b-724b-4349-8745-8803b025261a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.017014 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6wd\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-kube-api-access-mh6wd\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 
16:23:35.017109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/97466c9b-724b-4349-8745-8803b025261a-lock\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.017282 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.017492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.017846 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/97466c9b-724b-4349-8745-8803b025261a-cache\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.125759 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97466c9b-724b-4349-8745-8803b025261a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.125820 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6wd\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-kube-api-access-mh6wd\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.125858 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/97466c9b-724b-4349-8745-8803b025261a-lock\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.125888 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.125961 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.126185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/97466c9b-724b-4349-8745-8803b025261a-cache\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.126746 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/97466c9b-724b-4349-8745-8803b025261a-cache\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: E0312 16:23:35.126875 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 16:23:35 crc kubenswrapper[4687]: E0312 16:23:35.126900 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 16:23:35 crc kubenswrapper[4687]: E0312 16:23:35.126949 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift podName:97466c9b-724b-4349-8745-8803b025261a nodeName:}" failed. No retries permitted until 2026-03-12 16:23:35.626927598 +0000 UTC m=+1264.590890022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift") pod "swift-storage-0" (UID: "97466c9b-724b-4349-8745-8803b025261a") : configmap "swift-ring-files" not found Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.129075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/97466c9b-724b-4349-8745-8803b025261a-lock\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.132209 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97466c9b-724b-4349-8745-8803b025261a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.137956 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.137982 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d102823750190f83a3f3483642890c31cdd40e8781d5435ea1ae50612d5ca22/globalmount\"" pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.144664 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh6wd\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-kube-api-access-mh6wd\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.234703 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.237049 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.240858 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.241162 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.241326 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-p4h96" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.242015 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.248238 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.275827 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40fd9f96-1b45-4791-ad96-66f6f30cbc4b\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.332835 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d359c7e-5a3d-430f-85c4-89dea1de02d7-scripts\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.332941 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d359c7e-5a3d-430f-85c4-89dea1de02d7-config\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.333023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " 
pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.333040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks95f\" (UniqueName: \"kubernetes.io/projected/2d359c7e-5a3d-430f-85c4-89dea1de02d7-kube-api-access-ks95f\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.333066 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d359c7e-5a3d-430f-85c4-89dea1de02d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.333086 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.333124 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.435308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d359c7e-5a3d-430f-85c4-89dea1de02d7-scripts\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.435412 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d359c7e-5a3d-430f-85c4-89dea1de02d7-config\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.435470 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.435490 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks95f\" (UniqueName: \"kubernetes.io/projected/2d359c7e-5a3d-430f-85c4-89dea1de02d7-kube-api-access-ks95f\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.435512 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d359c7e-5a3d-430f-85c4-89dea1de02d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.435530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.435562 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.436506 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d359c7e-5a3d-430f-85c4-89dea1de02d7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.436938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d359c7e-5a3d-430f-85c4-89dea1de02d7-config\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.437009 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d359c7e-5a3d-430f-85c4-89dea1de02d7-scripts\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.439940 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.439976 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.440186 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d359c7e-5a3d-430f-85c4-89dea1de02d7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.454134 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks95f\" (UniqueName: \"kubernetes.io/projected/2d359c7e-5a3d-430f-85c4-89dea1de02d7-kube-api-access-ks95f\") pod \"ovn-northd-0\" (UID: \"2d359c7e-5a3d-430f-85c4-89dea1de02d7\") " pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.597891 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.638590 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:35 crc kubenswrapper[4687]: E0312 16:23:35.638735 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 16:23:35 crc kubenswrapper[4687]: E0312 16:23:35.638950 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 16:23:35 crc kubenswrapper[4687]: E0312 16:23:35.639004 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift podName:97466c9b-724b-4349-8745-8803b025261a nodeName:}" failed. No retries permitted until 2026-03-12 16:23:36.638986047 +0000 UTC m=+1265.602948391 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift") pod "swift-storage-0" (UID: "97466c9b-724b-4349-8745-8803b025261a") : configmap "swift-ring-files" not found Mar 12 16:23:35 crc kubenswrapper[4687]: E0312 16:23:35.875379 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb24db29f_d8ed_49ff_8f32_612345208003.slice/crio-conmon-07482ee2404f472217616b3bb74caeb560f705a9a937e22de589a6e9d3cd4c4a.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.942695 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" event={"ID":"ce59e430-0056-4fc3-88a0-333516c500a4","Type":"ContainerStarted","Data":"ca9ade217ecbd8e8f0bba24da37ea35a76f654700201de0018117c1c62f31ad5"} Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.944123 4687 generic.go:334] "Generic (PLEG): container finished" podID="b24db29f-d8ed-49ff-8f32-612345208003" containerID="07482ee2404f472217616b3bb74caeb560f705a9a937e22de589a6e9d3cd4c4a" exitCode=0 Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.944162 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerDied","Data":"07482ee2404f472217616b3bb74caeb560f705a9a937e22de589a6e9d3cd4c4a"} Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.957940 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerName="dnsmasq-dns" containerID="cri-o://5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4" gracePeriod=10 Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.958272 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" event={"ID":"afcaaea3-4a38-4006-9cf6-b2a5f92122c2","Type":"ContainerStarted","Data":"5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4"} Mar 12 16:23:35 crc kubenswrapper[4687]: I0312 16:23:35.958326 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.045780 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" podStartSLOduration=-9223371997.809011 podStartE2EDuration="39.0457635s" podCreationTimestamp="2026-03-12 16:22:57 +0000 UTC" firstStartedPulling="2026-03-12 16:22:58.023564257 +0000 UTC m=+1226.987526601" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:36.031404365 +0000 UTC m=+1264.995366709" watchObservedRunningTime="2026-03-12 16:23:36.0457635 +0000 UTC m=+1265.009725844" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.148526 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.590482 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.595719 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.675531 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnjtw\" (UniqueName: \"kubernetes.io/projected/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-kube-api-access-bnjtw\") pod \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.675575 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-dns-svc\") pod \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.675672 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-config\") pod \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\" (UID: \"052c84a6-4d8a-49bf-9afe-3586b8f55c5a\") " Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.675716 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt4lv\" (UniqueName: \"kubernetes.io/projected/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-kube-api-access-nt4lv\") pod \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.675793 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-config\") pod \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.675928 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-dns-svc\") pod \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\" (UID: \"afcaaea3-4a38-4006-9cf6-b2a5f92122c2\") " Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.676206 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: 
\"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:36 crc kubenswrapper[4687]: E0312 16:23:36.676499 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 16:23:36 crc kubenswrapper[4687]: E0312 16:23:36.676513 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 16:23:36 crc kubenswrapper[4687]: E0312 16:23:36.676552 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift podName:97466c9b-724b-4349-8745-8803b025261a nodeName:}" failed. No retries permitted until 2026-03-12 16:23:38.676538652 +0000 UTC m=+1267.640500996 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift") pod "swift-storage-0" (UID: "97466c9b-724b-4349-8745-8803b025261a") : configmap "swift-ring-files" not found Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.681212 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-kube-api-access-nt4lv" (OuterVolumeSpecName: "kube-api-access-nt4lv") pod "afcaaea3-4a38-4006-9cf6-b2a5f92122c2" (UID: "afcaaea3-4a38-4006-9cf6-b2a5f92122c2"). InnerVolumeSpecName "kube-api-access-nt4lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.682558 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-kube-api-access-bnjtw" (OuterVolumeSpecName: "kube-api-access-bnjtw") pod "052c84a6-4d8a-49bf-9afe-3586b8f55c5a" (UID: "052c84a6-4d8a-49bf-9afe-3586b8f55c5a"). InnerVolumeSpecName "kube-api-access-bnjtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.721591 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-config" (OuterVolumeSpecName: "config") pod "052c84a6-4d8a-49bf-9afe-3586b8f55c5a" (UID: "052c84a6-4d8a-49bf-9afe-3586b8f55c5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.722387 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afcaaea3-4a38-4006-9cf6-b2a5f92122c2" (UID: "afcaaea3-4a38-4006-9cf6-b2a5f92122c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.731955 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-config" (OuterVolumeSpecName: "config") pod "afcaaea3-4a38-4006-9cf6-b2a5f92122c2" (UID: "afcaaea3-4a38-4006-9cf6-b2a5f92122c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.733486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "052c84a6-4d8a-49bf-9afe-3586b8f55c5a" (UID: "052c84a6-4d8a-49bf-9afe-3586b8f55c5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.779334 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnjtw\" (UniqueName: \"kubernetes.io/projected/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-kube-api-access-bnjtw\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.779536 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.779593 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052c84a6-4d8a-49bf-9afe-3586b8f55c5a-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.779644 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt4lv\" (UniqueName: \"kubernetes.io/projected/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-kube-api-access-nt4lv\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.779722 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.779773 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afcaaea3-4a38-4006-9cf6-b2a5f92122c2-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.986143 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.986163 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-24427" event={"ID":"052c84a6-4d8a-49bf-9afe-3586b8f55c5a","Type":"ContainerDied","Data":"9848851d2640beab53e3cbf6a03217d611098011a6a282eaeefd1e96466cee63"} Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.986235 4687 scope.go:117] "RemoveContainer" containerID="f737336255690d3c2d2c7f384eca8640b4dfd0e288834eae9ff5ed0ce25083d3" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.997746 4687 generic.go:334] "Generic (PLEG): container finished" podID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerID="5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4" exitCode=0 Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.997818 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.997838 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" event={"ID":"afcaaea3-4a38-4006-9cf6-b2a5f92122c2","Type":"ContainerDied","Data":"5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4"} Mar 12 16:23:36 crc kubenswrapper[4687]: I0312 16:23:36.998176 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99lwk" event={"ID":"afcaaea3-4a38-4006-9cf6-b2a5f92122c2","Type":"ContainerDied","Data":"5bd95ba8b814645d764a68fcfa1dd97cb71e720c2b76400f33f982b2f2db19ab"} Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:36.999986 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2d359c7e-5a3d-430f-85c4-89dea1de02d7","Type":"ContainerStarted","Data":"8f44b679499ffee765b4ed618345a0ba07d73a44f28b9dad0f648c496f2986c5"} Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.001708 4687 generic.go:334] "Generic (PLEG): container finished" podID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" containerID="bd090d29e2c1fe9ea63ba12382a0aba8139e55f793ca5ae995c80ad5ce241cec" exitCode=0 Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.001773 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4gdg" event={"ID":"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff","Type":"ContainerDied","Data":"bd090d29e2c1fe9ea63ba12382a0aba8139e55f793ca5ae995c80ad5ce241cec"} Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.012901 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-42bmb" event={"ID":"12396dd7-4cb8-4e1d-87d5-e03d09c6a01e","Type":"ContainerStarted","Data":"592f7f12d782f738f3bf3c47431c9cab555801333b982dc5b2f9370ecabfe9cf"} Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.016239 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce59e430-0056-4fc3-88a0-333516c500a4" containerID="ca9ade217ecbd8e8f0bba24da37ea35a76f654700201de0018117c1c62f31ad5" exitCode=0 Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.016413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" event={"ID":"ce59e430-0056-4fc3-88a0-333516c500a4","Type":"ContainerDied","Data":"ca9ade217ecbd8e8f0bba24da37ea35a76f654700201de0018117c1c62f31ad5"} Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.025007 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-24427"] Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.039807 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-24427"] Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.084080 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-42bmb" podStartSLOduration=4.084053056 podStartE2EDuration="4.084053056s" podCreationTimestamp="2026-03-12 16:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:37.073786394 +0000 UTC m=+1266.037748738" watchObservedRunningTime="2026-03-12 16:23:37.084053056 +0000 UTC m=+1266.048015430" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.134854 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99lwk"] Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 
16:23:37.150048 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99lwk"] Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.159154 4687 scope.go:117] "RemoveContainer" containerID="6e61c99d0eaa6063a2f61700e57430a9133ee51c15171c6532b8b173da0c103c" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.228495 4687 scope.go:117] "RemoveContainer" containerID="5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.491518 4687 scope.go:117] "RemoveContainer" containerID="cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f" Mar 12 16:23:37 crc kubenswrapper[4687]: E0312 16:23:37.528158 4687 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 12 16:23:37 crc kubenswrapper[4687]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/ce59e430-0056-4fc3-88a0-333516c500a4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 12 16:23:37 crc kubenswrapper[4687]: > podSandboxID="d0ee580e84ffd81c823c586af65eea887f496d903dd9ddcf4460c908abf56253" Mar 12 16:23:37 crc kubenswrapper[4687]: E0312 16:23:37.528426 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 16:23:37 crc kubenswrapper[4687]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h5c8h5fbh5ddh5c5h666hbch5f5h66fh68fh87hfdh699h84hcdh589h64dh76h9ch5cfh5f8h56bh89h67h5fh56bhf6h654h556hch9dh657q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xx7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f896c8c65-ct4m2_openstack(ce59e430-0056-4fc3-88a0-333516c500a4): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/ce59e430-0056-4fc3-88a0-333516c500a4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 12 16:23:37 crc kubenswrapper[4687]: > logger="UnhandledError" Mar 12 16:23:37 crc kubenswrapper[4687]: E0312 16:23:37.529604 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/ce59e430-0056-4fc3-88a0-333516c500a4/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.540497 4687 scope.go:117] "RemoveContainer" containerID="5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4" Mar 12 16:23:37 crc kubenswrapper[4687]: E0312 16:23:37.541029 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4\": container with ID starting with 5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4 not found: ID does not exist" containerID="5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.541077 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4"} err="failed to get container status \"5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4\": rpc error: code = NotFound desc = could not find container \"5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4\": container with ID starting with 5eb813e5053bc7522f4073dea49fe99bd3f6e4a4d45eac1e3a59e0f868ba3da4 not found: ID does not exist" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.541111 4687 scope.go:117] "RemoveContainer" containerID="cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f" Mar 12 16:23:37 crc kubenswrapper[4687]: E0312 16:23:37.541543 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f\": container with ID starting with cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f not found: ID does not exist" containerID="cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f" Mar 12 
16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.541603 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f"} err="failed to get container status \"cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f\": rpc error: code = NotFound desc = could not find container \"cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f\": container with ID starting with cc19aafe6fd7095f4acc643fb64f26af9bef370d51ccaa03dfb4e2e3482ee92f not found: ID does not exist" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.753525 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" path="/var/lib/kubelet/pods/052c84a6-4d8a-49bf-9afe-3586b8f55c5a/volumes" Mar 12 16:23:37 crc kubenswrapper[4687]: I0312 16:23:37.755290 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" path="/var/lib/kubelet/pods/afcaaea3-4a38-4006-9cf6-b2a5f92122c2/volumes" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.032885 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2d359c7e-5a3d-430f-85c4-89dea1de02d7","Type":"ContainerStarted","Data":"ae557e017b141a69a16d58adf18a15206deaa64ef23de14586ec97b1954ae75b"} Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.032927 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2d359c7e-5a3d-430f-85c4-89dea1de02d7","Type":"ContainerStarted","Data":"5ffdc8fdde5411d25bc8ef533e160d9ad7149141d457494380b76b3b8ee54775"} Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.034790 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4gdg" event={"ID":"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff","Type":"ContainerStarted","Data":"44ce4690d8d5fd4a7afe339a5d7560e0231fff0e7f3583dac6063657f6f47fcb"} Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.063462 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-x4gdg" podStartSLOduration=5.063442172 podStartE2EDuration="5.063442172s" podCreationTimestamp="2026-03-12 16:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:38.052562074 +0000 UTC m=+1267.016524428" watchObservedRunningTime="2026-03-12 16:23:38.063442172 +0000 UTC m=+1267.027404516" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.728390 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:38 crc kubenswrapper[4687]: E0312 16:23:38.729145 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 16:23:38 crc kubenswrapper[4687]: E0312 16:23:38.729176 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 16:23:38 crc kubenswrapper[4687]: E0312 16:23:38.729232 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift podName:97466c9b-724b-4349-8745-8803b025261a 
nodeName:}" failed. No retries permitted until 2026-03-12 16:23:42.729215927 +0000 UTC m=+1271.693178271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift") pod "swift-storage-0" (UID: "97466c9b-724b-4349-8745-8803b025261a") : configmap "swift-ring-files" not found Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.822080 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hm5rd"] Mar 12 16:23:38 crc kubenswrapper[4687]: E0312 16:23:38.822729 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerName="dnsmasq-dns" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.822752 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerName="dnsmasq-dns" Mar 12 16:23:38 crc kubenswrapper[4687]: E0312 16:23:38.822779 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerName="init" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.822789 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerName="init" Mar 12 16:23:38 crc kubenswrapper[4687]: E0312 16:23:38.822813 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerName="dnsmasq-dns" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.822846 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerName="dnsmasq-dns" Mar 12 16:23:38 crc kubenswrapper[4687]: E0312 16:23:38.822862 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerName="init" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.822870 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerName="init" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.823097 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcaaea3-4a38-4006-9cf6-b2a5f92122c2" containerName="dnsmasq-dns" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.823118 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="052c84a6-4d8a-49bf-9afe-3586b8f55c5a" containerName="dnsmasq-dns" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.824017 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.826747 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.827008 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.836175 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.849930 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hm5rd"] Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.933570 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-swiftconf\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.933660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-ring-data-devices\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.933761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1b1f68f-7bdc-4437-922c-d0abc47c639c-etc-swift\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.933832 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5th4\" (UniqueName: \"kubernetes.io/projected/b1b1f68f-7bdc-4437-922c-d0abc47c639c-kube-api-access-g5th4\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.933883 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-combined-ca-bundle\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.934436 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-dispersionconf\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:38 crc kubenswrapper[4687]: I0312 16:23:38.934575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-scripts\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 
16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.036216 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-dispersionconf\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.036313 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-scripts\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.036422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-swiftconf\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.036475 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-ring-data-devices\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.036507 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1b1f68f-7bdc-4437-922c-d0abc47c639c-etc-swift\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.036564 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5th4\" (UniqueName: \"kubernetes.io/projected/b1b1f68f-7bdc-4437-922c-d0abc47c639c-kube-api-access-g5th4\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.036607 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-combined-ca-bundle\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.037713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-scripts\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.038102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1b1f68f-7bdc-4437-922c-d0abc47c639c-etc-swift\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.038119 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-ring-data-devices\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.042635 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-swiftconf\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.043174 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-dispersionconf\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.055283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-combined-ca-bundle\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.055764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" event={"ID":"ce59e430-0056-4fc3-88a0-333516c500a4","Type":"ContainerStarted","Data":"bf5c92d5efe1587e1b2d5609a72ff15e6ac24570499aa5a470475220f73dd7c0"} Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.056183 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5th4\" (UniqueName: \"kubernetes.io/projected/b1b1f68f-7bdc-4437-922c-d0abc47c639c-kube-api-access-g5th4\") pod \"swift-ring-rebalance-hm5rd\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.056462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.056494 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.076335 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.640110123 podStartE2EDuration="4.07631829s" podCreationTimestamp="2026-03-12 16:23:35 +0000 UTC" firstStartedPulling="2026-03-12 16:23:36.112513625 +0000 UTC m=+1265.076475959" lastFinishedPulling="2026-03-12 16:23:37.548721772 +0000 UTC m=+1266.512684126" observedRunningTime="2026-03-12 16:23:39.074740517 +0000 UTC m=+1268.038702871" watchObservedRunningTime="2026-03-12 16:23:39.07631829 +0000 UTC m=+1268.040280634" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.099868 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" podStartSLOduration=6.099848057 podStartE2EDuration="6.099848057s" podCreationTimestamp="2026-03-12 16:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:39.095287101 +0000 UTC m=+1268.059249445" watchObservedRunningTime="2026-03-12 
16:23:39.099848057 +0000 UTC m=+1268.063810401" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.145767 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.608413 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hm5rd"] Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.823657 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.823724 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 16:23:39 crc kubenswrapper[4687]: I0312 16:23:39.919757 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 16:23:40 crc kubenswrapper[4687]: I0312 16:23:40.067371 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hm5rd" event={"ID":"b1b1f68f-7bdc-4437-922c-d0abc47c639c","Type":"ContainerStarted","Data":"7d28f9b262988647ffa9c99af0edd1271b8545dd5c1079c89b92ba239b3b28ee"} Mar 12 16:23:40 crc kubenswrapper[4687]: I0312 16:23:40.163986 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.063245 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.063280 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.177236 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.273882 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.629511 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3eed-account-create-update-9lc6t"] Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.632599 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.640937 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.643387 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3eed-account-create-update-9lc6t"] Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.680502 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mx5f2"] Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.682204 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.694470 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mx5f2"] Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.805906 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a6e134-b880-4c0b-af7a-f14d5ecaca30-operator-scripts\") pod \"glance-db-create-mx5f2\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.805960 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxp7\" (UniqueName: \"kubernetes.io/projected/59a6e134-b880-4c0b-af7a-f14d5ecaca30-kube-api-access-cvxp7\") pod \"glance-db-create-mx5f2\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.805986 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7n5p\" (UniqueName: \"kubernetes.io/projected/d7869f58-eb4c-49a5-addf-c157afcb109b-kube-api-access-n7n5p\") pod \"glance-3eed-account-create-update-9lc6t\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.806482 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7869f58-eb4c-49a5-addf-c157afcb109b-operator-scripts\") pod \"glance-3eed-account-create-update-9lc6t\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.908670 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7n5p\" (UniqueName: \"kubernetes.io/projected/d7869f58-eb4c-49a5-addf-c157afcb109b-kube-api-access-n7n5p\") pod \"glance-3eed-account-create-update-9lc6t\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.908872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7869f58-eb4c-49a5-addf-c157afcb109b-operator-scripts\") pod \"glance-3eed-account-create-update-9lc6t\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.908976 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a6e134-b880-4c0b-af7a-f14d5ecaca30-operator-scripts\") pod \"glance-db-create-mx5f2\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.909005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxp7\" (UniqueName: \"kubernetes.io/projected/59a6e134-b880-4c0b-af7a-f14d5ecaca30-kube-api-access-cvxp7\") pod \"glance-db-create-mx5f2\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:41 crc 
kubenswrapper[4687]: I0312 16:23:41.910516 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a6e134-b880-4c0b-af7a-f14d5ecaca30-operator-scripts\") pod \"glance-db-create-mx5f2\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.910930 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7869f58-eb4c-49a5-addf-c157afcb109b-operator-scripts\") pod \"glance-3eed-account-create-update-9lc6t\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.927714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7n5p\" (UniqueName: \"kubernetes.io/projected/d7869f58-eb4c-49a5-addf-c157afcb109b-kube-api-access-n7n5p\") pod \"glance-3eed-account-create-update-9lc6t\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.928125 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxp7\" (UniqueName: \"kubernetes.io/projected/59a6e134-b880-4c0b-af7a-f14d5ecaca30-kube-api-access-cvxp7\") pod \"glance-db-create-mx5f2\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:41 crc kubenswrapper[4687]: I0312 16:23:41.981691 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.017054 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.353510 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-vv8qj"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.355153 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.374674 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dc0-account-create-update-mwgwc"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.375931 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.378415 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.409961 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vv8qj"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.429444 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc0-account-create-update-mwgwc"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.524627 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5z7\" (UniqueName: \"kubernetes.io/projected/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-kube-api-access-zw5z7\") pod \"keystone-db-create-vv8qj\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.524682 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1799451-d37d-4851-93e4-cb394f3d1739-operator-scripts\") pod \"keystone-5dc0-account-create-update-mwgwc\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.524729 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6wp\" (UniqueName: \"kubernetes.io/projected/a1799451-d37d-4851-93e4-cb394f3d1739-kube-api-access-hc6wp\") pod \"keystone-5dc0-account-create-update-mwgwc\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.524761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-operator-scripts\") pod \"keystone-db-create-vv8qj\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.558431 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-d8vg2"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.563295 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.587087 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d5f9-account-create-update-4s62g"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.588836 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.591847 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.626057 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6wp\" (UniqueName: \"kubernetes.io/projected/a1799451-d37d-4851-93e4-cb394f3d1739-kube-api-access-hc6wp\") pod \"keystone-5dc0-account-create-update-mwgwc\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.626130 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-operator-scripts\") pod \"keystone-db-create-vv8qj\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.626202 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d8vg2"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.626311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw5z7\" (UniqueName: \"kubernetes.io/projected/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-kube-api-access-zw5z7\") pod \"keystone-db-create-vv8qj\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.626372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1799451-d37d-4851-93e4-cb394f3d1739-operator-scripts\") pod \"keystone-5dc0-account-create-update-mwgwc\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.627329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-operator-scripts\") pod \"keystone-db-create-vv8qj\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.627520 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1799451-d37d-4851-93e4-cb394f3d1739-operator-scripts\") pod \"keystone-5dc0-account-create-update-mwgwc\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.641102 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d5f9-account-create-update-4s62g"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.653673 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6wp\" (UniqueName: \"kubernetes.io/projected/a1799451-d37d-4851-93e4-cb394f3d1739-kube-api-access-hc6wp\") pod \"keystone-5dc0-account-create-update-mwgwc\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.654370 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zw5z7\" (UniqueName: \"kubernetes.io/projected/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-kube-api-access-zw5z7\") pod \"keystone-db-create-vv8qj\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.672765 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.728683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c132350-8fd3-41bb-90a9-cee0bd08112f-operator-scripts\") pod \"placement-d5f9-account-create-update-4s62g\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.728786 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39486168-4d75-4418-a2d8-576c03bb7743-operator-scripts\") pod \"placement-db-create-d8vg2\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.728814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rs6\" (UniqueName: \"kubernetes.io/projected/39486168-4d75-4418-a2d8-576c03bb7743-kube-api-access-k5rs6\") pod \"placement-db-create-d8vg2\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.728842 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j84h\" (UniqueName: \"kubernetes.io/projected/7c132350-8fd3-41bb-90a9-cee0bd08112f-kube-api-access-5j84h\") pod \"placement-d5f9-account-create-update-4s62g\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.830515 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c132350-8fd3-41bb-90a9-cee0bd08112f-operator-scripts\") pod \"placement-d5f9-account-create-update-4s62g\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.830890 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.830959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39486168-4d75-4418-a2d8-576c03bb7743-operator-scripts\") pod \"placement-db-create-d8vg2\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.831005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rs6\" (UniqueName: 
\"kubernetes.io/projected/39486168-4d75-4418-a2d8-576c03bb7743-kube-api-access-k5rs6\") pod \"placement-db-create-d8vg2\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.831045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j84h\" (UniqueName: \"kubernetes.io/projected/7c132350-8fd3-41bb-90a9-cee0bd08112f-kube-api-access-5j84h\") pod \"placement-d5f9-account-create-update-4s62g\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: E0312 16:23:42.831073 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 16:23:42 crc kubenswrapper[4687]: E0312 16:23:42.831094 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 16:23:42 crc kubenswrapper[4687]: E0312 16:23:42.831135 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift podName:97466c9b-724b-4349-8745-8803b025261a nodeName:}" failed. No retries permitted until 2026-03-12 16:23:50.831117622 +0000 UTC m=+1279.795080066 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift") pod "swift-storage-0" (UID: "97466c9b-724b-4349-8745-8803b025261a") : configmap "swift-ring-files" not found Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.831481 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c132350-8fd3-41bb-90a9-cee0bd08112f-operator-scripts\") pod \"placement-d5f9-account-create-update-4s62g\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.832346 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39486168-4d75-4418-a2d8-576c03bb7743-operator-scripts\") pod \"placement-db-create-d8vg2\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.849538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j84h\" (UniqueName: \"kubernetes.io/projected/7c132350-8fd3-41bb-90a9-cee0bd08112f-kube-api-access-5j84h\") pod \"placement-d5f9-account-create-update-4s62g\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.849910 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rs6\" (UniqueName: \"kubernetes.io/projected/39486168-4d75-4418-a2d8-576c03bb7743-kube-api-access-k5rs6\") pod \"placement-db-create-d8vg2\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.877620 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.913933 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.919117 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3eed-account-create-update-9lc6t"] Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.928473 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:42 crc kubenswrapper[4687]: I0312 16:23:42.928712 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mx5f2"] Mar 12 16:23:42 crc kubenswrapper[4687]: W0312 16:23:42.934070 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7869f58_eb4c_49a5_addf_c157afcb109b.slice/crio-7bab96087f3389f4828012a8a3aa6b2b4f0d7b3b00f5de2c14dee62b31035029 WatchSource:0}: Error finding container 7bab96087f3389f4828012a8a3aa6b2b4f0d7b3b00f5de2c14dee62b31035029: Status 404 returned error can't find the container with id 7bab96087f3389f4828012a8a3aa6b2b4f0d7b3b00f5de2c14dee62b31035029 Mar 12 16:23:42 crc kubenswrapper[4687]: W0312 16:23:42.936384 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59a6e134_b880_4c0b_af7a_f14d5ecaca30.slice/crio-9400d51fdc46d2378b1d569239a26c7c30f1936b7aa957030b0349f28a90b49f WatchSource:0}: Error finding container 9400d51fdc46d2378b1d569239a26c7c30f1936b7aa957030b0349f28a90b49f: Status 404 returned error can't find the container with id 9400d51fdc46d2378b1d569239a26c7c30f1936b7aa957030b0349f28a90b49f Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.115490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerStarted","Data":"87ded7cfd707791f2f0e8ce611c372b1044929a9f3296091708ee917be327222"} Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.116684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mx5f2" event={"ID":"59a6e134-b880-4c0b-af7a-f14d5ecaca30","Type":"ContainerStarted","Data":"9400d51fdc46d2378b1d569239a26c7c30f1936b7aa957030b0349f28a90b49f"} Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.118150 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3eed-account-create-update-9lc6t" event={"ID":"d7869f58-eb4c-49a5-addf-c157afcb109b","Type":"ContainerStarted","Data":"7bab96087f3389f4828012a8a3aa6b2b4f0d7b3b00f5de2c14dee62b31035029"} Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.184080 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vv8qj"] Mar 12 16:23:43 crc kubenswrapper[4687]: W0312 16:23:43.200802 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84221c3f_e3cb_4784_b4ed_fd5c00e9fc9b.slice/crio-f53a0b6979c74041a263dbd57b2ca4965dfb511b40ce061242596319e1c63396 WatchSource:0}: Error finding container f53a0b6979c74041a263dbd57b2ca4965dfb511b40ce061242596319e1c63396: Status 404 returned error can't find the container with id f53a0b6979c74041a263dbd57b2ca4965dfb511b40ce061242596319e1c63396 Mar 12 16:23:43 crc 
kubenswrapper[4687]: I0312 16:23:43.439655 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d8vg2"] Mar 12 16:23:43 crc kubenswrapper[4687]: W0312 16:23:43.449691 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39486168_4d75_4418_a2d8_576c03bb7743.slice/crio-bafa75477f9b704716decede418adb64f5b0c8c2d5e41f36e10f9aee05d687c1 WatchSource:0}: Error finding container bafa75477f9b704716decede418adb64f5b0c8c2d5e41f36e10f9aee05d687c1: Status 404 returned error can't find the container with id bafa75477f9b704716decede418adb64f5b0c8c2d5e41f36e10f9aee05d687c1 Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.534729 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc0-account-create-update-mwgwc"] Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.538972 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.539794 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.548562 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d5f9-account-create-update-4s62g"] Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.687905 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7zgtv"] Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.701983 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.725463 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7zgtv"] Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.867744 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e11478c5-fa5a-4676-a297-ca9b2db901e6-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7zgtv\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.867873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7x4\" (UniqueName: \"kubernetes.io/projected/e11478c5-fa5a-4676-a297-ca9b2db901e6-kube-api-access-bs7x4\") pod \"mysqld-exporter-openstack-db-create-7zgtv\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.907912 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-9858-account-create-update-95r55"] Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.909260 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.916659 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.920814 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9858-account-create-update-95r55"] Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.969863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e11478c5-fa5a-4676-a297-ca9b2db901e6-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7zgtv\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.969942 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs7x4\" (UniqueName: \"kubernetes.io/projected/e11478c5-fa5a-4676-a297-ca9b2db901e6-kube-api-access-bs7x4\") pod \"mysqld-exporter-openstack-db-create-7zgtv\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:43 crc kubenswrapper[4687]: I0312 16:23:43.970702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e11478c5-fa5a-4676-a297-ca9b2db901e6-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7zgtv\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.062225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs7x4\" (UniqueName: \"kubernetes.io/projected/e11478c5-fa5a-4676-a297-ca9b2db901e6-kube-api-access-bs7x4\") pod \"mysqld-exporter-openstack-db-create-7zgtv\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.071833 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckh9s\" (UniqueName: \"kubernetes.io/projected/d038a44f-695f-49b3-8878-f7bee07fd444-kube-api-access-ckh9s\") pod \"mysqld-exporter-9858-account-create-update-95r55\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.072059 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d038a44f-695f-49b3-8878-f7bee07fd444-operator-scripts\") pod \"mysqld-exporter-9858-account-create-update-95r55\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.080540 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.138578 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ct4m2"] Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.157658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d8vg2" 
event={"ID":"39486168-4d75-4418-a2d8-576c03bb7743","Type":"ContainerStarted","Data":"34520867e9b3da785ebe8fdfd742448d9cd815348ccf59cb11f77414ece37e47"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.157706 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d8vg2" event={"ID":"39486168-4d75-4418-a2d8-576c03bb7743","Type":"ContainerStarted","Data":"bafa75477f9b704716decede418adb64f5b0c8c2d5e41f36e10f9aee05d687c1"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.162234 4687 generic.go:334] "Generic (PLEG): container finished" podID="d7869f58-eb4c-49a5-addf-c157afcb109b" containerID="b768375832f3ff94be486b7e2e9563a7c9f17008795cd518b18c8260563de2a9" exitCode=0 Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.162335 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3eed-account-create-update-9lc6t" event={"ID":"d7869f58-eb4c-49a5-addf-c157afcb109b","Type":"ContainerDied","Data":"b768375832f3ff94be486b7e2e9563a7c9f17008795cd518b18c8260563de2a9"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.164850 4687 generic.go:334] "Generic (PLEG): container finished" podID="84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b" containerID="25224d9f872c79f9893a29df95451c820a490b86a01c20a6d5ec249dc8c1ee0a" exitCode=0 Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.164939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vv8qj" event={"ID":"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b","Type":"ContainerDied","Data":"25224d9f872c79f9893a29df95451c820a490b86a01c20a6d5ec249dc8c1ee0a"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.164965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vv8qj" event={"ID":"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b","Type":"ContainerStarted","Data":"f53a0b6979c74041a263dbd57b2ca4965dfb511b40ce061242596319e1c63396"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.166491 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d5f9-account-create-update-4s62g" event={"ID":"7c132350-8fd3-41bb-90a9-cee0bd08112f","Type":"ContainerStarted","Data":"83e9d2efe9cb1d4a2040ef0d9d31be5c6990d8b674506805bc8e90a6c5a0cede"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.166547 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d5f9-account-create-update-4s62g" event={"ID":"7c132350-8fd3-41bb-90a9-cee0bd08112f","Type":"ContainerStarted","Data":"6aeeefc3ec82835957ca48459af5fc92276d6db6facfbd3fcf3d9b0736aaea3b"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.174062 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckh9s\" (UniqueName: \"kubernetes.io/projected/d038a44f-695f-49b3-8878-f7bee07fd444-kube-api-access-ckh9s\") pod \"mysqld-exporter-9858-account-create-update-95r55\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.174291 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d038a44f-695f-49b3-8878-f7bee07fd444-operator-scripts\") pod \"mysqld-exporter-9858-account-create-update-95r55\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.175215 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d038a44f-695f-49b3-8878-f7bee07fd444-operator-scripts\") pod \"mysqld-exporter-9858-account-create-update-95r55\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.175258 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc0-account-create-update-mwgwc" event={"ID":"a1799451-d37d-4851-93e4-cb394f3d1739","Type":"ContainerStarted","Data":"d1f12555ea0196703321946d370ece5fd948a2a83b16cc82cd855fdeb8eafaad"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.175312 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc0-account-create-update-mwgwc" event={"ID":"a1799451-d37d-4851-93e4-cb394f3d1739","Type":"ContainerStarted","Data":"db4e116a0de4d443e0aacfc45f6dc5fb7cbc8de807195d69cf6dc33036992d3e"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.187107 4687 generic.go:334] "Generic (PLEG): container finished" podID="59a6e134-b880-4c0b-af7a-f14d5ecaca30" containerID="7172ec7e3f01eeb7591852362a5baee513b172d794fc0a9d1fba680584d098f7" exitCode=0 Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.187262 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mx5f2" event={"ID":"59a6e134-b880-4c0b-af7a-f14d5ecaca30","Type":"ContainerDied","Data":"7172ec7e3f01eeb7591852362a5baee513b172d794fc0a9d1fba680584d098f7"} Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.194788 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-d8vg2" podStartSLOduration=2.194763552 podStartE2EDuration="2.194763552s" podCreationTimestamp="2026-03-12 16:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:44.186190186 +0000 UTC m=+1273.150152530" watchObservedRunningTime="2026-03-12 16:23:44.194763552 +0000 UTC m=+1273.158725896" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.204323 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckh9s\" (UniqueName: \"kubernetes.io/projected/d038a44f-695f-49b3-8878-f7bee07fd444-kube-api-access-ckh9s\") pod \"mysqld-exporter-9858-account-create-update-95r55\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.235036 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d5f9-account-create-update-4s62g" podStartSLOduration=2.234987238 podStartE2EDuration="2.234987238s" podCreationTimestamp="2026-03-12 16:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:44.20669057 +0000 UTC m=+1273.170652914" watchObservedRunningTime="2026-03-12 16:23:44.234987238 +0000 UTC m=+1273.198949592" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.283818 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dc0-account-create-update-mwgwc" podStartSLOduration=2.28379934 podStartE2EDuration="2.28379934s" podCreationTimestamp="2026-03-12 16:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 16:23:44.240244362 +0000 UTC m=+1273.204206706" watchObservedRunningTime="2026-03-12 16:23:44.28379934 +0000 UTC m=+1273.247761684" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.338612 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:44 crc kubenswrapper[4687]: I0312 16:23:44.342471 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.220821 4687 generic.go:334] "Generic (PLEG): container finished" podID="a1799451-d37d-4851-93e4-cb394f3d1739" containerID="d1f12555ea0196703321946d370ece5fd948a2a83b16cc82cd855fdeb8eafaad" exitCode=0 Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.220915 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc0-account-create-update-mwgwc" event={"ID":"a1799451-d37d-4851-93e4-cb394f3d1739","Type":"ContainerDied","Data":"d1f12555ea0196703321946d370ece5fd948a2a83b16cc82cd855fdeb8eafaad"} Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.224986 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerStarted","Data":"f46d0e7a828227b4eaf07d482669324d5960c3f33f8fb3e70abf1290c49080e7"} Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.227109 4687 generic.go:334] "Generic (PLEG): container finished" podID="39486168-4d75-4418-a2d8-576c03bb7743" containerID="34520867e9b3da785ebe8fdfd742448d9cd815348ccf59cb11f77414ece37e47" exitCode=0 Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.227166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d8vg2" event={"ID":"39486168-4d75-4418-a2d8-576c03bb7743","Type":"ContainerDied","Data":"34520867e9b3da785ebe8fdfd742448d9cd815348ccf59cb11f77414ece37e47"} Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.228521 4687 generic.go:334] "Generic (PLEG): container finished" podID="7c132350-8fd3-41bb-90a9-cee0bd08112f" containerID="83e9d2efe9cb1d4a2040ef0d9d31be5c6990d8b674506805bc8e90a6c5a0cede" exitCode=0 Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.228677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d5f9-account-create-update-4s62g" event={"ID":"7c132350-8fd3-41bb-90a9-cee0bd08112f","Type":"ContainerDied","Data":"83e9d2efe9cb1d4a2040ef0d9d31be5c6990d8b674506805bc8e90a6c5a0cede"} Mar 12 16:23:45 crc kubenswrapper[4687]: I0312 16:23:45.228882 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" containerName="dnsmasq-dns" containerID="cri-o://bf5c92d5efe1587e1b2d5609a72ff15e6ac24570499aa5a470475220f73dd7c0" gracePeriod=10 Mar 12 16:23:46 crc kubenswrapper[4687]: I0312 16:23:46.243619 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce59e430-0056-4fc3-88a0-333516c500a4" containerID="bf5c92d5efe1587e1b2d5609a72ff15e6ac24570499aa5a470475220f73dd7c0" exitCode=0 Mar 12 16:23:46 crc kubenswrapper[4687]: I0312 16:23:46.243723 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" event={"ID":"ce59e430-0056-4fc3-88a0-333516c500a4","Type":"ContainerDied","Data":"bf5c92d5efe1587e1b2d5609a72ff15e6ac24570499aa5a470475220f73dd7c0"} Mar 12 16:23:47 crc 
kubenswrapper[4687]: I0312 16:23:47.887566 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:47 crc kubenswrapper[4687]: I0312 16:23:47.898423 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:47 crc kubenswrapper[4687]: I0312 16:23:47.909881 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:47 crc kubenswrapper[4687]: I0312 16:23:47.911851 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:47 crc kubenswrapper[4687]: I0312 16:23:47.956256 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:47 crc kubenswrapper[4687]: I0312 16:23:47.956643 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.003874 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-operator-scripts\") pod \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.004011 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1799451-d37d-4851-93e4-cb394f3d1739-operator-scripts\") pod \"a1799451-d37d-4851-93e4-cb394f3d1739\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.004083 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a6e134-b880-4c0b-af7a-f14d5ecaca30-operator-scripts\") pod \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.004386 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvxp7\" (UniqueName: \"kubernetes.io/projected/59a6e134-b880-4c0b-af7a-f14d5ecaca30-kube-api-access-cvxp7\") pod \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\" (UID: \"59a6e134-b880-4c0b-af7a-f14d5ecaca30\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.004517 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc6wp\" (UniqueName: \"kubernetes.io/projected/a1799451-d37d-4851-93e4-cb394f3d1739-kube-api-access-hc6wp\") pod \"a1799451-d37d-4851-93e4-cb394f3d1739\" (UID: \"a1799451-d37d-4851-93e4-cb394f3d1739\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.004600 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw5z7\" (UniqueName: \"kubernetes.io/projected/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-kube-api-access-zw5z7\") pod \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\" (UID: \"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.006680 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a6e134-b880-4c0b-af7a-f14d5ecaca30-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "59a6e134-b880-4c0b-af7a-f14d5ecaca30" (UID: "59a6e134-b880-4c0b-af7a-f14d5ecaca30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.007104 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1799451-d37d-4851-93e4-cb394f3d1739-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1799451-d37d-4851-93e4-cb394f3d1739" (UID: "a1799451-d37d-4851-93e4-cb394f3d1739"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.007427 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b" (UID: "84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.009854 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1799451-d37d-4851-93e4-cb394f3d1739-kube-api-access-hc6wp" (OuterVolumeSpecName: "kube-api-access-hc6wp") pod "a1799451-d37d-4851-93e4-cb394f3d1739" (UID: "a1799451-d37d-4851-93e4-cb394f3d1739"). InnerVolumeSpecName "kube-api-access-hc6wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.011026 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-kube-api-access-zw5z7" (OuterVolumeSpecName: "kube-api-access-zw5z7") pod "84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b" (UID: "84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b"). InnerVolumeSpecName "kube-api-access-zw5z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.011869 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a6e134-b880-4c0b-af7a-f14d5ecaca30-kube-api-access-cvxp7" (OuterVolumeSpecName: "kube-api-access-cvxp7") pod "59a6e134-b880-4c0b-af7a-f14d5ecaca30" (UID: "59a6e134-b880-4c0b-af7a-f14d5ecaca30"). InnerVolumeSpecName "kube-api-access-cvxp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.066163 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.107997 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7n5p\" (UniqueName: \"kubernetes.io/projected/d7869f58-eb4c-49a5-addf-c157afcb109b-kube-api-access-n7n5p\") pod \"d7869f58-eb4c-49a5-addf-c157afcb109b\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.108286 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39486168-4d75-4418-a2d8-576c03bb7743-operator-scripts\") pod \"39486168-4d75-4418-a2d8-576c03bb7743\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.108370 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7869f58-eb4c-49a5-addf-c157afcb109b-operator-scripts\") pod \"d7869f58-eb4c-49a5-addf-c157afcb109b\" (UID: \"d7869f58-eb4c-49a5-addf-c157afcb109b\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.108491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j84h\" (UniqueName: \"kubernetes.io/projected/7c132350-8fd3-41bb-90a9-cee0bd08112f-kube-api-access-5j84h\") pod \"7c132350-8fd3-41bb-90a9-cee0bd08112f\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.108590 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5rs6\" (UniqueName: \"kubernetes.io/projected/39486168-4d75-4418-a2d8-576c03bb7743-kube-api-access-k5rs6\") pod \"39486168-4d75-4418-a2d8-576c03bb7743\" (UID: \"39486168-4d75-4418-a2d8-576c03bb7743\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.108721 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c132350-8fd3-41bb-90a9-cee0bd08112f-operator-scripts\") pod \"7c132350-8fd3-41bb-90a9-cee0bd08112f\" (UID: \"7c132350-8fd3-41bb-90a9-cee0bd08112f\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109230 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39486168-4d75-4418-a2d8-576c03bb7743-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39486168-4d75-4418-a2d8-576c03bb7743" (UID: "39486168-4d75-4418-a2d8-576c03bb7743"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109448 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1799451-d37d-4851-93e4-cb394f3d1739-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109468 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39486168-4d75-4418-a2d8-576c03bb7743-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109477 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a6e134-b880-4c0b-af7a-f14d5ecaca30-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109488 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvxp7\" (UniqueName: \"kubernetes.io/projected/59a6e134-b880-4c0b-af7a-f14d5ecaca30-kube-api-access-cvxp7\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109499 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc6wp\" (UniqueName: \"kubernetes.io/projected/a1799451-d37d-4851-93e4-cb394f3d1739-kube-api-access-hc6wp\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109507 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw5z7\" (UniqueName: \"kubernetes.io/projected/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-kube-api-access-zw5z7\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109515 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.109985 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c132350-8fd3-41bb-90a9-cee0bd08112f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c132350-8fd3-41bb-90a9-cee0bd08112f" (UID: "7c132350-8fd3-41bb-90a9-cee0bd08112f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.113052 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7869f58-eb4c-49a5-addf-c157afcb109b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7869f58-eb4c-49a5-addf-c157afcb109b" (UID: "d7869f58-eb4c-49a5-addf-c157afcb109b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.118235 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c132350-8fd3-41bb-90a9-cee0bd08112f-kube-api-access-5j84h" (OuterVolumeSpecName: "kube-api-access-5j84h") pod "7c132350-8fd3-41bb-90a9-cee0bd08112f" (UID: "7c132350-8fd3-41bb-90a9-cee0bd08112f"). InnerVolumeSpecName "kube-api-access-5j84h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.120184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39486168-4d75-4418-a2d8-576c03bb7743-kube-api-access-k5rs6" (OuterVolumeSpecName: "kube-api-access-k5rs6") pod "39486168-4d75-4418-a2d8-576c03bb7743" (UID: "39486168-4d75-4418-a2d8-576c03bb7743"). InnerVolumeSpecName "kube-api-access-k5rs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.120712 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7869f58-eb4c-49a5-addf-c157afcb109b-kube-api-access-n7n5p" (OuterVolumeSpecName: "kube-api-access-n7n5p") pod "d7869f58-eb4c-49a5-addf-c157afcb109b" (UID: "d7869f58-eb4c-49a5-addf-c157afcb109b"). InnerVolumeSpecName "kube-api-access-n7n5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.210409 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xx7n\" (UniqueName: \"kubernetes.io/projected/ce59e430-0056-4fc3-88a0-333516c500a4-kube-api-access-5xx7n\") pod \"ce59e430-0056-4fc3-88a0-333516c500a4\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.210499 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-config\") pod \"ce59e430-0056-4fc3-88a0-333516c500a4\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.210555 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-dns-svc\") pod \"ce59e430-0056-4fc3-88a0-333516c500a4\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.210603 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-ovsdbserver-sb\") pod \"ce59e430-0056-4fc3-88a0-333516c500a4\" (UID: \"ce59e430-0056-4fc3-88a0-333516c500a4\") " Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.211175 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7869f58-eb4c-49a5-addf-c157afcb109b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.211192 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j84h\" (UniqueName: \"kubernetes.io/projected/7c132350-8fd3-41bb-90a9-cee0bd08112f-kube-api-access-5j84h\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.211202 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5rs6\" (UniqueName: \"kubernetes.io/projected/39486168-4d75-4418-a2d8-576c03bb7743-kube-api-access-k5rs6\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.211213 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c132350-8fd3-41bb-90a9-cee0bd08112f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.211221 
4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7n5p\" (UniqueName: \"kubernetes.io/projected/d7869f58-eb4c-49a5-addf-c157afcb109b-kube-api-access-n7n5p\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.215187 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce59e430-0056-4fc3-88a0-333516c500a4-kube-api-access-5xx7n" (OuterVolumeSpecName: "kube-api-access-5xx7n") pod "ce59e430-0056-4fc3-88a0-333516c500a4" (UID: "ce59e430-0056-4fc3-88a0-333516c500a4"). InnerVolumeSpecName "kube-api-access-5xx7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.260408 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-config" (OuterVolumeSpecName: "config") pod "ce59e430-0056-4fc3-88a0-333516c500a4" (UID: "ce59e430-0056-4fc3-88a0-333516c500a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.264605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce59e430-0056-4fc3-88a0-333516c500a4" (UID: "ce59e430-0056-4fc3-88a0-333516c500a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.272382 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3eed-account-create-update-9lc6t" event={"ID":"d7869f58-eb4c-49a5-addf-c157afcb109b","Type":"ContainerDied","Data":"7bab96087f3389f4828012a8a3aa6b2b4f0d7b3b00f5de2c14dee62b31035029"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.272431 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bab96087f3389f4828012a8a3aa6b2b4f0d7b3b00f5de2c14dee62b31035029" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.272456 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3eed-account-create-update-9lc6t" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.274650 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vv8qj" event={"ID":"84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b","Type":"ContainerDied","Data":"f53a0b6979c74041a263dbd57b2ca4965dfb511b40ce061242596319e1c63396"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.274690 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53a0b6979c74041a263dbd57b2ca4965dfb511b40ce061242596319e1c63396" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.274763 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vv8qj" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.277620 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hm5rd" event={"ID":"b1b1f68f-7bdc-4437-922c-d0abc47c639c","Type":"ContainerStarted","Data":"3e30898e0b39483d9051dfb1f67ff164833ae2830b075ae66f897ed3a0017b60"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.281422 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d5f9-account-create-update-4s62g" event={"ID":"7c132350-8fd3-41bb-90a9-cee0bd08112f","Type":"ContainerDied","Data":"6aeeefc3ec82835957ca48459af5fc92276d6db6facfbd3fcf3d9b0736aaea3b"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.281444 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d5f9-account-create-update-4s62g" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.281462 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aeeefc3ec82835957ca48459af5fc92276d6db6facfbd3fcf3d9b0736aaea3b" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.287132 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce59e430-0056-4fc3-88a0-333516c500a4" (UID: "ce59e430-0056-4fc3-88a0-333516c500a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.293322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" event={"ID":"ce59e430-0056-4fc3-88a0-333516c500a4","Type":"ContainerDied","Data":"d0ee580e84ffd81c823c586af65eea887f496d903dd9ddcf4460c908abf56253"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.293409 4687 scope.go:117] "RemoveContainer" containerID="bf5c92d5efe1587e1b2d5609a72ff15e6ac24570499aa5a470475220f73dd7c0" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.293959 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-ct4m2" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.300432 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hm5rd" podStartSLOduration=2.102902364 podStartE2EDuration="10.30041989s" podCreationTimestamp="2026-03-12 16:23:38 +0000 UTC" firstStartedPulling="2026-03-12 16:23:39.633091777 +0000 UTC m=+1268.597054121" lastFinishedPulling="2026-03-12 16:23:47.830609303 +0000 UTC m=+1276.794571647" observedRunningTime="2026-03-12 16:23:48.295238927 +0000 UTC m=+1277.259201271" watchObservedRunningTime="2026-03-12 16:23:48.30041989 +0000 UTC m=+1277.264382234" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.306004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc0-account-create-update-mwgwc" event={"ID":"a1799451-d37d-4851-93e4-cb394f3d1739","Type":"ContainerDied","Data":"db4e116a0de4d443e0aacfc45f6dc5fb7cbc8de807195d69cf6dc33036992d3e"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.306037 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4e116a0de4d443e0aacfc45f6dc5fb7cbc8de807195d69cf6dc33036992d3e" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.306095 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dc0-account-create-update-mwgwc" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.308899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mx5f2" event={"ID":"59a6e134-b880-4c0b-af7a-f14d5ecaca30","Type":"ContainerDied","Data":"9400d51fdc46d2378b1d569239a26c7c30f1936b7aa957030b0349f28a90b49f"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.308941 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9400d51fdc46d2378b1d569239a26c7c30f1936b7aa957030b0349f28a90b49f" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.309015 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mx5f2" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.316142 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xx7n\" (UniqueName: \"kubernetes.io/projected/ce59e430-0056-4fc3-88a0-333516c500a4-kube-api-access-5xx7n\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.316160 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.316170 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.316179 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce59e430-0056-4fc3-88a0-333516c500a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.317144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d8vg2" event={"ID":"39486168-4d75-4418-a2d8-576c03bb7743","Type":"ContainerDied","Data":"bafa75477f9b704716decede418adb64f5b0c8c2d5e41f36e10f9aee05d687c1"} Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.317176 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafa75477f9b704716decede418adb64f5b0c8c2d5e41f36e10f9aee05d687c1" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.317220 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-d8vg2" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.325453 4687 scope.go:117] "RemoveContainer" containerID="ca9ade217ecbd8e8f0bba24da37ea35a76f654700201de0018117c1c62f31ad5" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.362343 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ct4m2"] Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.373651 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-ct4m2"] Mar 12 16:23:48 crc kubenswrapper[4687]: W0312 16:23:48.381666 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode11478c5_fa5a_4676_a297_ca9b2db901e6.slice/crio-936b8ec9c78b1a4d001c423e63bbb4f7f4d7bf3b6383c82b706651d1d12e48cc WatchSource:0}: Error finding container 936b8ec9c78b1a4d001c423e63bbb4f7f4d7bf3b6383c82b706651d1d12e48cc: Status 404 returned error can't find the container with id 936b8ec9c78b1a4d001c423e63bbb4f7f4d7bf3b6383c82b706651d1d12e48cc Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.383145 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7zgtv"] Mar 12 16:23:48 crc kubenswrapper[4687]: W0312 16:23:48.390690 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd038a44f_695f_49b3_8878_f7bee07fd444.slice/crio-2671721e1ef0691a223bc9a7802274978b745f2905dfbad6bcd0e87d08c1923a WatchSource:0}: Error finding container 2671721e1ef0691a223bc9a7802274978b745f2905dfbad6bcd0e87d08c1923a: Status 404 returned error can't find the container with id 2671721e1ef0691a223bc9a7802274978b745f2905dfbad6bcd0e87d08c1923a Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.392631 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9858-account-create-update-95r55"] Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.445813 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9grw9"] Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.446733 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a6e134-b880-4c0b-af7a-f14d5ecaca30" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.446801 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a6e134-b880-4c0b-af7a-f14d5ecaca30" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.446867 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39486168-4d75-4418-a2d8-576c03bb7743" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.446937 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="39486168-4d75-4418-a2d8-576c03bb7743" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.447063 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1799451-d37d-4851-93e4-cb394f3d1739" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.447114 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1799451-d37d-4851-93e4-cb394f3d1739" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.447182 4687 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" containerName="init" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.447233 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" containerName="init" Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.447442 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7869f58-eb4c-49a5-addf-c157afcb109b" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.447513 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7869f58-eb4c-49a5-addf-c157afcb109b" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.447582 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" containerName="dnsmasq-dns" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.447632 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" containerName="dnsmasq-dns" Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.447693 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.447748 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: E0312 16:23:48.447807 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c132350-8fd3-41bb-90a9-cee0bd08112f" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.447865 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c132350-8fd3-41bb-90a9-cee0bd08112f" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.448155 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.448240 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c132350-8fd3-41bb-90a9-cee0bd08112f" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.448302 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a6e134-b880-4c0b-af7a-f14d5ecaca30" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.448380 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7869f58-eb4c-49a5-addf-c157afcb109b" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.448451 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" containerName="dnsmasq-dns" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.448538 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1799451-d37d-4851-93e4-cb394f3d1739" containerName="mariadb-account-create-update" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.448604 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="39486168-4d75-4418-a2d8-576c03bb7743" containerName="mariadb-database-create" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.449336 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.451658 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.474826 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9grw9"] Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.622688 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-operator-scripts\") pod \"root-account-create-update-9grw9\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.622912 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp8rs\" (UniqueName: \"kubernetes.io/projected/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-kube-api-access-lp8rs\") pod \"root-account-create-update-9grw9\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.725331 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-operator-scripts\") pod \"root-account-create-update-9grw9\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.725544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp8rs\" (UniqueName: \"kubernetes.io/projected/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-kube-api-access-lp8rs\") pod \"root-account-create-update-9grw9\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.726079 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-operator-scripts\") pod \"root-account-create-update-9grw9\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.743713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp8rs\" (UniqueName: \"kubernetes.io/projected/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-kube-api-access-lp8rs\") pod \"root-account-create-update-9grw9\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:48 crc kubenswrapper[4687]: I0312 16:23:48.777449 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.229851 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9grw9"] Mar 12 16:23:49 crc kubenswrapper[4687]: W0312 16:23:49.241586 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d67ee6b_53d1_4525_b2bd_5d918a9651bb.slice/crio-0345485756ba042970b049c326760ac2c205233067c9975f4dd95402e167bac7 WatchSource:0}: Error finding container 0345485756ba042970b049c326760ac2c205233067c9975f4dd95402e167bac7: Status 404 returned error can't find the container with id 0345485756ba042970b049c326760ac2c205233067c9975f4dd95402e167bac7 Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.333865 4687 generic.go:334] "Generic (PLEG): container finished" podID="e11478c5-fa5a-4676-a297-ca9b2db901e6" containerID="4d3c6d3f811b2ab6c359bcc5540dd7bc5178e3addaed06480092f8f35c00a5b0" exitCode=0 Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.333988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" event={"ID":"e11478c5-fa5a-4676-a297-ca9b2db901e6","Type":"ContainerDied","Data":"4d3c6d3f811b2ab6c359bcc5540dd7bc5178e3addaed06480092f8f35c00a5b0"} Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.334031 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" event={"ID":"e11478c5-fa5a-4676-a297-ca9b2db901e6","Type":"ContainerStarted","Data":"936b8ec9c78b1a4d001c423e63bbb4f7f4d7bf3b6383c82b706651d1d12e48cc"} Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.335255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9grw9" event={"ID":"1d67ee6b-53d1-4525-b2bd-5d918a9651bb","Type":"ContainerStarted","Data":"0345485756ba042970b049c326760ac2c205233067c9975f4dd95402e167bac7"} Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.339456 4687 generic.go:334] "Generic (PLEG): container finished" podID="d038a44f-695f-49b3-8878-f7bee07fd444" containerID="4dddb16df8822f7c3c92115d4a93f30b427ba29980676d0d9012bd6624bb432e" exitCode=0 Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.339689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9858-account-create-update-95r55" event={"ID":"d038a44f-695f-49b3-8878-f7bee07fd444","Type":"ContainerDied","Data":"4dddb16df8822f7c3c92115d4a93f30b427ba29980676d0d9012bd6624bb432e"} Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.339726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9858-account-create-update-95r55" event={"ID":"d038a44f-695f-49b3-8878-f7bee07fd444","Type":"ContainerStarted","Data":"2671721e1ef0691a223bc9a7802274978b745f2905dfbad6bcd0e87d08c1923a"} Mar 12 16:23:49 crc kubenswrapper[4687]: I0312 16:23:49.753793 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce59e430-0056-4fc3-88a0-333516c500a4" path="/var/lib/kubelet/pods/ce59e430-0056-4fc3-88a0-333516c500a4/volumes" Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.353120 4687 generic.go:334] "Generic (PLEG): container finished" podID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerID="be93c8a792c8bdf44ed1a5742f64d7c5582812aaca1bff03b025d337a8592e0f" exitCode=0 Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.353188 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-server-2" event={"ID":"4ee0a72c-d057-4042-8611-c509c4ed3edf","Type":"ContainerDied","Data":"be93c8a792c8bdf44ed1a5742f64d7c5582812aaca1bff03b025d337a8592e0f"} Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.358924 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e395399-5a85-445f-9b56-4036687b73b6" containerID="a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878" exitCode=0 Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.359033 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4e395399-5a85-445f-9b56-4036687b73b6","Type":"ContainerDied","Data":"a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878"} Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.361742 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d67ee6b-53d1-4525-b2bd-5d918a9651bb" containerID="4e2270bd184b3941096bbd26dd80a4d17af176b06faf227eb4d015fb52cb0d49" exitCode=0 Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.361810 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9grw9" event={"ID":"1d67ee6b-53d1-4525-b2bd-5d918a9651bb","Type":"ContainerDied","Data":"4e2270bd184b3941096bbd26dd80a4d17af176b06faf227eb4d015fb52cb0d49"} Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.880731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:23:50 crc kubenswrapper[4687]: E0312 16:23:50.880976 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 16:23:50 crc kubenswrapper[4687]: E0312 16:23:50.881105 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 16:23:50 crc kubenswrapper[4687]: E0312 16:23:50.881156 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift podName:97466c9b-724b-4349-8745-8803b025261a nodeName:}" failed. No retries permitted until 2026-03-12 16:24:06.881139632 +0000 UTC m=+1295.845101976 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift") pod "swift-storage-0" (UID: "97466c9b-724b-4349-8745-8803b025261a") : configmap "swift-ring-files" not found Mar 12 16:23:50 crc kubenswrapper[4687]: I0312 16:23:50.900210 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6775b889d8-xvw66" podUID="f94357ad-9ce2-48be-b148-23cb9f8c0621" containerName="console" containerID="cri-o://3e493f37738432e3542fd7fd7ee440101b1fb1f8f45f94af567c20c16470f5cd" gracePeriod=15 Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.373643 4687 generic.go:334] "Generic (PLEG): container finished" podID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerID="539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac" exitCode=0 Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.373822 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ae78a70-0a58-4f39-977b-a4e5ce5b981d","Type":"ContainerDied","Data":"539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac"} Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.384537 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9858-account-create-update-95r55" event={"ID":"d038a44f-695f-49b3-8878-f7bee07fd444","Type":"ContainerDied","Data":"2671721e1ef0691a223bc9a7802274978b745f2905dfbad6bcd0e87d08c1923a"} Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.384574 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2671721e1ef0691a223bc9a7802274978b745f2905dfbad6bcd0e87d08c1923a" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.394806 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6775b889d8-xvw66_f94357ad-9ce2-48be-b148-23cb9f8c0621/console/0.log" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.394865 4687 generic.go:334] "Generic (PLEG): container finished" podID="f94357ad-9ce2-48be-b148-23cb9f8c0621" containerID="3e493f37738432e3542fd7fd7ee440101b1fb1f8f45f94af567c20c16470f5cd" exitCode=2 Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.394912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6775b889d8-xvw66" event={"ID":"f94357ad-9ce2-48be-b148-23cb9f8c0621","Type":"ContainerDied","Data":"3e493f37738432e3542fd7fd7ee440101b1fb1f8f45f94af567c20c16470f5cd"} Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.396720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" event={"ID":"e11478c5-fa5a-4676-a297-ca9b2db901e6","Type":"ContainerDied","Data":"936b8ec9c78b1a4d001c423e63bbb4f7f4d7bf3b6383c82b706651d1d12e48cc"} Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.396760 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936b8ec9c78b1a4d001c423e63bbb4f7f4d7bf3b6383c82b706651d1d12e48cc" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.398305 4687 generic.go:334] "Generic (PLEG): container finished" podID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerID="cbb7ab54062a9c089c1ea7df44f36b511eb1f51b9b3ba64674dd3f4420154282" exitCode=0 Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.398493 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"2d83f3b6-81ea-457d-815e-22eab66d4058","Type":"ContainerDied","Data":"cbb7ab54062a9c089c1ea7df44f36b511eb1f51b9b3ba64674dd3f4420154282"} Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.650157 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.693508 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.726022 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6775b889d8-xvw66_f94357ad-9ce2-48be-b148-23cb9f8c0621/console/0.log" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.726081 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.811772 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-config\") pod \"f94357ad-9ce2-48be-b148-23cb9f8c0621\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.812203 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckh9s\" (UniqueName: \"kubernetes.io/projected/d038a44f-695f-49b3-8878-f7bee07fd444-kube-api-access-ckh9s\") pod \"d038a44f-695f-49b3-8878-f7bee07fd444\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.812414 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d038a44f-695f-49b3-8878-f7bee07fd444-operator-scripts\") pod \"d038a44f-695f-49b3-8878-f7bee07fd444\" (UID: \"d038a44f-695f-49b3-8878-f7bee07fd444\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.812547 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs7x4\" (UniqueName: \"kubernetes.io/projected/e11478c5-fa5a-4676-a297-ca9b2db901e6-kube-api-access-bs7x4\") pod \"e11478c5-fa5a-4676-a297-ca9b2db901e6\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.812646 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-service-ca\") pod \"f94357ad-9ce2-48be-b148-23cb9f8c0621\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.812744 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh25s\" (UniqueName: \"kubernetes.io/projected/f94357ad-9ce2-48be-b148-23cb9f8c0621-kube-api-access-rh25s\") pod \"f94357ad-9ce2-48be-b148-23cb9f8c0621\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.812908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-trusted-ca-bundle\") pod \"f94357ad-9ce2-48be-b148-23cb9f8c0621\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 
16:23:51.813012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-oauth-serving-cert\") pod \"f94357ad-9ce2-48be-b148-23cb9f8c0621\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.813092 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e11478c5-fa5a-4676-a297-ca9b2db901e6-operator-scripts\") pod \"e11478c5-fa5a-4676-a297-ca9b2db901e6\" (UID: \"e11478c5-fa5a-4676-a297-ca9b2db901e6\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.813184 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-oauth-config\") pod \"f94357ad-9ce2-48be-b148-23cb9f8c0621\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.813278 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-serving-cert\") pod \"f94357ad-9ce2-48be-b148-23cb9f8c0621\" (UID: \"f94357ad-9ce2-48be-b148-23cb9f8c0621\") " Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.813081 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d038a44f-695f-49b3-8878-f7bee07fd444-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d038a44f-695f-49b3-8878-f7bee07fd444" (UID: "d038a44f-695f-49b3-8878-f7bee07fd444"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.816429 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-config" (OuterVolumeSpecName: "console-config") pod "f94357ad-9ce2-48be-b148-23cb9f8c0621" (UID: "f94357ad-9ce2-48be-b148-23cb9f8c0621"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.817144 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11478c5-fa5a-4676-a297-ca9b2db901e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e11478c5-fa5a-4676-a297-ca9b2db901e6" (UID: "e11478c5-fa5a-4676-a297-ca9b2db901e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.817815 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f94357ad-9ce2-48be-b148-23cb9f8c0621" (UID: "f94357ad-9ce2-48be-b148-23cb9f8c0621"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.818074 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94357ad-9ce2-48be-b148-23cb9f8c0621-kube-api-access-rh25s" (OuterVolumeSpecName: "kube-api-access-rh25s") pod "f94357ad-9ce2-48be-b148-23cb9f8c0621" (UID: "f94357ad-9ce2-48be-b148-23cb9f8c0621"). 
InnerVolumeSpecName "kube-api-access-rh25s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.818320 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f94357ad-9ce2-48be-b148-23cb9f8c0621" (UID: "f94357ad-9ce2-48be-b148-23cb9f8c0621"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.820952 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d038a44f-695f-49b3-8878-f7bee07fd444-kube-api-access-ckh9s" (OuterVolumeSpecName: "kube-api-access-ckh9s") pod "d038a44f-695f-49b3-8878-f7bee07fd444" (UID: "d038a44f-695f-49b3-8878-f7bee07fd444"). InnerVolumeSpecName "kube-api-access-ckh9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.821198 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11478c5-fa5a-4676-a297-ca9b2db901e6-kube-api-access-bs7x4" (OuterVolumeSpecName: "kube-api-access-bs7x4") pod "e11478c5-fa5a-4676-a297-ca9b2db901e6" (UID: "e11478c5-fa5a-4676-a297-ca9b2db901e6"). InnerVolumeSpecName "kube-api-access-bs7x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.822614 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f94357ad-9ce2-48be-b148-23cb9f8c0621" (UID: "f94357ad-9ce2-48be-b148-23cb9f8c0621"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.822668 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-service-ca" (OuterVolumeSpecName: "service-ca") pod "f94357ad-9ce2-48be-b148-23cb9f8c0621" (UID: "f94357ad-9ce2-48be-b148-23cb9f8c0621"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.822827 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f94357ad-9ce2-48be-b148-23cb9f8c0621" (UID: "f94357ad-9ce2-48be-b148-23cb9f8c0621"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.907877 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917090 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917123 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckh9s\" (UniqueName: \"kubernetes.io/projected/d038a44f-695f-49b3-8878-f7bee07fd444-kube-api-access-ckh9s\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917137 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d038a44f-695f-49b3-8878-f7bee07fd444-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917148 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs7x4\" (UniqueName: \"kubernetes.io/projected/e11478c5-fa5a-4676-a297-ca9b2db901e6-kube-api-access-bs7x4\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917161 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917174 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh25s\" (UniqueName: \"kubernetes.io/projected/f94357ad-9ce2-48be-b148-23cb9f8c0621-kube-api-access-rh25s\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917185 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917195 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f94357ad-9ce2-48be-b148-23cb9f8c0621-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917206 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e11478c5-fa5a-4676-a297-ca9b2db901e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917216 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.917227 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f94357ad-9ce2-48be-b148-23cb9f8c0621-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.977828 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9v52n"] Mar 12 16:23:51 crc kubenswrapper[4687]: E0312 16:23:51.978341 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11478c5-fa5a-4676-a297-ca9b2db901e6" containerName="mariadb-database-create" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.978775 4687 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e11478c5-fa5a-4676-a297-ca9b2db901e6" containerName="mariadb-database-create" Mar 12 16:23:51 crc kubenswrapper[4687]: E0312 16:23:51.978799 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d038a44f-695f-49b3-8878-f7bee07fd444" containerName="mariadb-account-create-update" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.978821 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d038a44f-695f-49b3-8878-f7bee07fd444" containerName="mariadb-account-create-update" Mar 12 16:23:51 crc kubenswrapper[4687]: E0312 16:23:51.978853 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d67ee6b-53d1-4525-b2bd-5d918a9651bb" containerName="mariadb-account-create-update" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.978861 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d67ee6b-53d1-4525-b2bd-5d918a9651bb" containerName="mariadb-account-create-update" Mar 12 16:23:51 crc kubenswrapper[4687]: E0312 16:23:51.978893 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94357ad-9ce2-48be-b148-23cb9f8c0621" containerName="console" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.978903 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94357ad-9ce2-48be-b148-23cb9f8c0621" containerName="console" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.979143 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d67ee6b-53d1-4525-b2bd-5d918a9651bb" containerName="mariadb-account-create-update" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.979163 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94357ad-9ce2-48be-b148-23cb9f8c0621" containerName="console" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.979182 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11478c5-fa5a-4676-a297-ca9b2db901e6" containerName="mariadb-database-create" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.979197 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d038a44f-695f-49b3-8878-f7bee07fd444" containerName="mariadb-account-create-update" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.980092 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.982509 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gln9t" Mar 12 16:23:51 crc kubenswrapper[4687]: I0312 16:23:51.997146 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.021542 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-operator-scripts\") pod \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.021979 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp8rs\" (UniqueName: \"kubernetes.io/projected/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-kube-api-access-lp8rs\") pod \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\" (UID: \"1d67ee6b-53d1-4525-b2bd-5d918a9651bb\") " Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.022564 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d67ee6b-53d1-4525-b2bd-5d918a9651bb" (UID: "1d67ee6b-53d1-4525-b2bd-5d918a9651bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.023463 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9v52n"] Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.029282 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-kube-api-access-lp8rs" (OuterVolumeSpecName: "kube-api-access-lp8rs") pod "1d67ee6b-53d1-4525-b2bd-5d918a9651bb" (UID: "1d67ee6b-53d1-4525-b2bd-5d918a9651bb"). InnerVolumeSpecName "kube-api-access-lp8rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.124500 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgv8x\" (UniqueName: \"kubernetes.io/projected/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-kube-api-access-fgv8x\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.124605 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-combined-ca-bundle\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.124647 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-db-sync-config-data\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.124785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-config-data\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.124936 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp8rs\" (UniqueName: \"kubernetes.io/projected/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-kube-api-access-lp8rs\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.124959 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d67ee6b-53d1-4525-b2bd-5d918a9651bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.227061 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-combined-ca-bundle\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.227111 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-db-sync-config-data\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.227200 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-config-data\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.227313 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgv8x\" (UniqueName: 
\"kubernetes.io/projected/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-kube-api-access-fgv8x\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.230829 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-db-sync-config-data\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.230973 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-combined-ca-bundle\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.231994 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-config-data\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.248266 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgv8x\" (UniqueName: \"kubernetes.io/projected/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-kube-api-access-fgv8x\") pod \"glance-db-sync-9v52n\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.308893 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9v52n" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.438585 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"4ee0a72c-d057-4042-8611-c509c4ed3edf","Type":"ContainerStarted","Data":"7b0e59304d730b2823fb550356eb276687767419fd27e5af45ae6396cd176e8e"} Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.438838 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.442882 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6775b889d8-xvw66_f94357ad-9ce2-48be-b148-23cb9f8c0621/console/0.log" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.442950 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6775b889d8-xvw66" event={"ID":"f94357ad-9ce2-48be-b148-23cb9f8c0621","Type":"ContainerDied","Data":"eeda4ea8eeb869d5f952a404396f23f18136643cc933da50d07289ad7a1f92be"} Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.442985 4687 scope.go:117] "RemoveContainer" containerID="3e493f37738432e3542fd7fd7ee440101b1fb1f8f45f94af567c20c16470f5cd" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.443115 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6775b889d8-xvw66" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.462615 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4e395399-5a85-445f-9b56-4036687b73b6","Type":"ContainerStarted","Data":"aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7"} Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.463793 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.477483 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d83f3b6-81ea-457d-815e-22eab66d4058","Type":"ContainerStarted","Data":"963c2aca180558b10e64cb275b5a9ea2370cabec78549813d99926381e8a452b"} Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.479925 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.485604 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9grw9" event={"ID":"1d67ee6b-53d1-4525-b2bd-5d918a9651bb","Type":"ContainerDied","Data":"0345485756ba042970b049c326760ac2c205233067c9975f4dd95402e167bac7"} Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.485652 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0345485756ba042970b049c326760ac2c205233067c9975f4dd95402e167bac7" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.485785 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9grw9" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.497911 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ae78a70-0a58-4f39-977b-a4e5ce5b981d","Type":"ContainerStarted","Data":"c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e"} Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.499971 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.506639 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=44.392613643 podStartE2EDuration="55.506620842s" podCreationTimestamp="2026-03-12 16:22:57 +0000 UTC" firstStartedPulling="2026-03-12 16:23:05.692424769 +0000 UTC m=+1234.656387113" lastFinishedPulling="2026-03-12 16:23:16.806431968 +0000 UTC m=+1245.770394312" observedRunningTime="2026-03-12 16:23:52.501295065 +0000 UTC m=+1281.465257409" watchObservedRunningTime="2026-03-12 16:23:52.506620842 +0000 UTC m=+1281.470583186" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.519854 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9858-account-create-update-95r55" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.522613 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerStarted","Data":"d52f0097a139c9de43de02718c0d92388285045802fa768d8ae7c15745e24192"} Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.522687 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7zgtv" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.619474 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.842798316 podStartE2EDuration="56.619457943s" podCreationTimestamp="2026-03-12 16:22:56 +0000 UTC" firstStartedPulling="2026-03-12 16:22:58.831569892 +0000 UTC m=+1227.795532226" lastFinishedPulling="2026-03-12 16:23:16.608229509 +0000 UTC m=+1245.572191853" observedRunningTime="2026-03-12 16:23:52.619146755 +0000 UTC m=+1281.583109099" watchObservedRunningTime="2026-03-12 16:23:52.619457943 +0000 UTC m=+1281.583420287" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.640919 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=54.576949433 podStartE2EDuration="55.640898384s" podCreationTimestamp="2026-03-12 16:22:57 +0000 UTC" firstStartedPulling="2026-03-12 16:23:15.74223678 +0000 UTC m=+1244.706199134" lastFinishedPulling="2026-03-12 16:23:16.806185741 +0000 UTC m=+1245.770148085" observedRunningTime="2026-03-12 16:23:52.573339686 +0000 UTC m=+1281.537302030" watchObservedRunningTime="2026-03-12 16:23:52.640898384 +0000 UTC m=+1281.604860728" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.671570 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6775b889d8-xvw66"] Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.684337 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6775b889d8-xvw66"] Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.715895 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.631904399 podStartE2EDuration="55.715865844s" podCreationTimestamp="2026-03-12 16:22:57 +0000 UTC" firstStartedPulling="2026-03-12 16:23:05.72160064 +0000 UTC m=+1234.685562974" lastFinishedPulling="2026-03-12 16:23:16.805562075 +0000 UTC m=+1245.769524419" observedRunningTime="2026-03-12 16:23:52.661261243 +0000 UTC m=+1281.625223587" watchObservedRunningTime="2026-03-12 16:23:52.715865844 +0000 UTC m=+1281.679828188" Mar 12 16:23:52 crc kubenswrapper[4687]: I0312 16:23:52.748748 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.250037772 podStartE2EDuration="49.748720248s" podCreationTimestamp="2026-03-12 16:23:03 +0000 UTC" firstStartedPulling="2026-03-12 16:23:18.060495667 +0000 UTC m=+1247.024458001" lastFinishedPulling="2026-03-12 16:23:51.559178143 +0000 UTC m=+1280.523140477" observedRunningTime="2026-03-12 16:23:52.735086213 +0000 UTC m=+1281.699048557" watchObservedRunningTime="2026-03-12 16:23:52.748720248 +0000 UTC m=+1281.712682592" Mar 12 16:23:53 crc kubenswrapper[4687]: W0312 16:23:53.070299 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeee0bc3f_1e9b_4d05_bdcc_747f6fbe96bc.slice/crio-742c62cc219c9a00afccafad94457e80c6608ce4ecbbbe0536387dd52c67f368 WatchSource:0}: Error finding container 742c62cc219c9a00afccafad94457e80c6608ce4ecbbbe0536387dd52c67f368: Status 404 returned error can't find the container with id 742c62cc219c9a00afccafad94457e80c6608ce4ecbbbe0536387dd52c67f368 Mar 12 16:23:53 crc kubenswrapper[4687]: I0312 16:23:53.079710 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-db-sync-9v52n"] Mar 12 16:23:53 crc kubenswrapper[4687]: I0312 16:23:53.528866 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9v52n" event={"ID":"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc","Type":"ContainerStarted","Data":"742c62cc219c9a00afccafad94457e80c6608ce4ecbbbe0536387dd52c67f368"} Mar 12 16:23:53 crc kubenswrapper[4687]: I0312 16:23:53.744114 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f94357ad-9ce2-48be-b148-23cb9f8c0621" path="/var/lib/kubelet/pods/f94357ad-9ce2-48be-b148-23cb9f8c0621/volumes" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.140946 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx"] Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.167784 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx"] Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.167874 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.273702 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75mp4\" (UniqueName: \"kubernetes.io/projected/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-kube-api-access-75mp4\") pod \"mysqld-exporter-openstack-cell1-db-create-jkbbx\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.273999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-jkbbx\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.345083 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-f8f1-account-create-update-tp6zx"] Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.346595 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.353128 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.357597 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-f8f1-account-create-update-tp6zx"] Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.375834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75mp4\" (UniqueName: \"kubernetes.io/projected/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-kube-api-access-75mp4\") pod \"mysqld-exporter-openstack-cell1-db-create-jkbbx\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.376118 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-jkbbx\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.377082 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-jkbbx\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.405997 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75mp4\" (UniqueName: \"kubernetes.io/projected/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-kube-api-access-75mp4\") pod \"mysqld-exporter-openstack-cell1-db-create-jkbbx\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.478899 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqmn\" (UniqueName: \"kubernetes.io/projected/30cd364a-718d-4924-91ec-c98402368e0c-kube-api-access-bbqmn\") pod \"mysqld-exporter-f8f1-account-create-update-tp6zx\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.479127 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cd364a-718d-4924-91ec-c98402368e0c-operator-scripts\") pod \"mysqld-exporter-f8f1-account-create-update-tp6zx\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.496106 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.581880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cd364a-718d-4924-91ec-c98402368e0c-operator-scripts\") pod \"mysqld-exporter-f8f1-account-create-update-tp6zx\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.582158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqmn\" (UniqueName: \"kubernetes.io/projected/30cd364a-718d-4924-91ec-c98402368e0c-kube-api-access-bbqmn\") pod \"mysqld-exporter-f8f1-account-create-update-tp6zx\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.582563 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cd364a-718d-4924-91ec-c98402368e0c-operator-scripts\") pod \"mysqld-exporter-f8f1-account-create-update-tp6zx\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.604601 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqmn\" (UniqueName: \"kubernetes.io/projected/30cd364a-718d-4924-91ec-c98402368e0c-kube-api-access-bbqmn\") pod \"mysqld-exporter-f8f1-account-create-update-tp6zx\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.662281 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.702517 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9grw9"] Mar 12 16:23:54 crc kubenswrapper[4687]: I0312 16:23:54.719378 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9grw9"] Mar 12 16:23:55 crc kubenswrapper[4687]: I0312 16:23:55.000389 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 12 16:23:55 crc kubenswrapper[4687]: I0312 16:23:55.679816 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 12 16:23:55 crc kubenswrapper[4687]: I0312 16:23:55.762017 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d67ee6b-53d1-4525-b2bd-5d918a9651bb" path="/var/lib/kubelet/pods/1d67ee6b-53d1-4525-b2bd-5d918a9651bb/volumes" Mar 12 16:23:56 crc kubenswrapper[4687]: I0312 16:23:56.245563 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9x5hb" podUID="7c43d5a9-eafe-4910-acf5-0502509982b3" containerName="ovn-controller" probeResult="failure" output=< Mar 12 16:23:56 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 16:23:56 crc kubenswrapper[4687]: > Mar 12 16:23:56 crc kubenswrapper[4687]: I0312 16:23:56.950033 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-f8f1-account-create-update-tp6zx"] Mar 12 16:23:57 crc kubenswrapper[4687]: I0312 16:23:57.028280 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx"] Mar 12 16:23:57 crc kubenswrapper[4687]: W0312 16:23:57.035268 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d87d3a2_3fe6_4d8b_9fc5_08bfb4ee5c62.slice/crio-981f6a5ccd0bcc9be647ae6c20718232cf3a657735fd56932f0799b4b4b794a2 WatchSource:0}: Error finding container 981f6a5ccd0bcc9be647ae6c20718232cf3a657735fd56932f0799b4b4b794a2: Status 404 returned error can't find the container with id 981f6a5ccd0bcc9be647ae6c20718232cf3a657735fd56932f0799b4b4b794a2 Mar 12 16:23:57 crc kubenswrapper[4687]: I0312 16:23:57.570999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" event={"ID":"30cd364a-718d-4924-91ec-c98402368e0c","Type":"ContainerStarted","Data":"4a2a28e9840c3747e005167a1379d64bcf5372546be9fa387dda3079818eff73"} Mar 12 16:23:57 crc kubenswrapper[4687]: I0312 16:23:57.573014 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" event={"ID":"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62","Type":"ContainerStarted","Data":"981f6a5ccd0bcc9be647ae6c20718232cf3a657735fd56932f0799b4b4b794a2"} Mar 12 16:23:58 crc kubenswrapper[4687]: I0312 16:23:58.583409 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" event={"ID":"30cd364a-718d-4924-91ec-c98402368e0c","Type":"ContainerStarted","Data":"084532cf29d593d9392bb6b0b721e6aa46235245effdbbf003056c27ac60cca9"} Mar 12 16:23:58 crc kubenswrapper[4687]: I0312 16:23:58.586088 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" 
event={"ID":"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62","Type":"ContainerStarted","Data":"613f2c31eaa4b8533a0b484844a67ee2c8688b1222e88e253dd8c66f1f372f4c"} Mar 12 16:23:58 crc kubenswrapper[4687]: I0312 16:23:58.618736 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" podStartSLOduration=4.618715613 podStartE2EDuration="4.618715613s" podCreationTimestamp="2026-03-12 16:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:58.602755624 +0000 UTC m=+1287.566717968" watchObservedRunningTime="2026-03-12 16:23:58.618715613 +0000 UTC m=+1287.582677957" Mar 12 16:23:58 crc kubenswrapper[4687]: I0312 16:23:58.620950 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" podStartSLOduration=4.620934744 podStartE2EDuration="4.620934744s" podCreationTimestamp="2026-03-12 16:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:23:58.619582057 +0000 UTC m=+1287.583544401" watchObservedRunningTime="2026-03-12 16:23:58.620934744 +0000 UTC m=+1287.584897088" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.597395 4687 generic.go:334] "Generic (PLEG): container finished" podID="4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62" containerID="613f2c31eaa4b8533a0b484844a67ee2c8688b1222e88e253dd8c66f1f372f4c" exitCode=0 Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.597457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" event={"ID":"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62","Type":"ContainerDied","Data":"613f2c31eaa4b8533a0b484844a67ee2c8688b1222e88e253dd8c66f1f372f4c"} Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.704313 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-89qs5"] Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.706192 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-89qs5" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.709895 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.727830 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-89qs5"] Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.809623 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da6dedc-beb0-4108-839d-f234a2cf3caf-operator-scripts\") pod \"root-account-create-update-89qs5\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " pod="openstack/root-account-create-update-89qs5" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.810079 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tct\" (UniqueName: \"kubernetes.io/projected/0da6dedc-beb0-4108-839d-f234a2cf3caf-kube-api-access-27tct\") pod \"root-account-create-update-89qs5\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " pod="openstack/root-account-create-update-89qs5" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.912407 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da6dedc-beb0-4108-839d-f234a2cf3caf-operator-scripts\") pod \"root-account-create-update-89qs5\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " pod="openstack/root-account-create-update-89qs5" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.912567 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tct\" (UniqueName: \"kubernetes.io/projected/0da6dedc-beb0-4108-839d-f234a2cf3caf-kube-api-access-27tct\") pod \"root-account-create-update-89qs5\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " pod="openstack/root-account-create-update-89qs5" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.913529 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da6dedc-beb0-4108-839d-f234a2cf3caf-operator-scripts\") pod \"root-account-create-update-89qs5\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " pod="openstack/root-account-create-update-89qs5" Mar 12 16:23:59 crc kubenswrapper[4687]: I0312 16:23:59.946313 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tct\" (UniqueName: \"kubernetes.io/projected/0da6dedc-beb0-4108-839d-f234a2cf3caf-kube-api-access-27tct\") pod \"root-account-create-update-89qs5\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " pod="openstack/root-account-create-update-89qs5" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.028445 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-89qs5" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.146962 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555544-wmmm5"] Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.148217 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555544-wmmm5" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.153216 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.153531 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.153707 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.156811 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555544-wmmm5"] Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.220194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7sc\" (UniqueName: \"kubernetes.io/projected/9b2e2820-00f3-4111-96a4-4d378529f9fd-kube-api-access-fv7sc\") pod \"auto-csr-approver-29555544-wmmm5\" (UID: \"9b2e2820-00f3-4111-96a4-4d378529f9fd\") " pod="openshift-infra/auto-csr-approver-29555544-wmmm5" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.322507 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7sc\" (UniqueName: \"kubernetes.io/projected/9b2e2820-00f3-4111-96a4-4d378529f9fd-kube-api-access-fv7sc\") pod \"auto-csr-approver-29555544-wmmm5\" (UID: \"9b2e2820-00f3-4111-96a4-4d378529f9fd\") " pod="openshift-infra/auto-csr-approver-29555544-wmmm5" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.340571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7sc\" (UniqueName: \"kubernetes.io/projected/9b2e2820-00f3-4111-96a4-4d378529f9fd-kube-api-access-fv7sc\") pod \"auto-csr-approver-29555544-wmmm5\" (UID: \"9b2e2820-00f3-4111-96a4-4d378529f9fd\") " pod="openshift-infra/auto-csr-approver-29555544-wmmm5" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.468279 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555544-wmmm5" Mar 12 16:24:00 crc kubenswrapper[4687]: I0312 16:24:00.724950 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-89qs5"] Mar 12 16:24:00 crc kubenswrapper[4687]: W0312 16:24:00.809015 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0da6dedc_beb0_4108_839d_f234a2cf3caf.slice/crio-0be654a8ce75a25dca167a3b57c691ea0f96cf181374e27358da6fbca6c13afa WatchSource:0}: Error finding container 0be654a8ce75a25dca167a3b57c691ea0f96cf181374e27358da6fbca6c13afa: Status 404 returned error can't find the container with id 0be654a8ce75a25dca167a3b57c691ea0f96cf181374e27358da6fbca6c13afa Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.113207 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555544-wmmm5"] Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.245240 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9x5hb" podUID="7c43d5a9-eafe-4910-acf5-0502509982b3" containerName="ovn-controller" probeResult="failure" output=< Mar 12 16:24:01 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 16:24:01 crc kubenswrapper[4687]: > Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.318889 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.324287 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jwnk7" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.351214 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.408768 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-operator-scripts\") pod \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.408908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75mp4\" (UniqueName: \"kubernetes.io/projected/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-kube-api-access-75mp4\") pod \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\" (UID: \"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62\") " Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.409683 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62" (UID: "4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.426832 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-kube-api-access-75mp4" (OuterVolumeSpecName: "kube-api-access-75mp4") pod "4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62" (UID: "4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62"). InnerVolumeSpecName "kube-api-access-75mp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.511810 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.511839 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75mp4\" (UniqueName: \"kubernetes.io/projected/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62-kube-api-access-75mp4\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.563673 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9x5hb-config-dgd95"] Mar 12 16:24:01 crc kubenswrapper[4687]: E0312 16:24:01.564197 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62" containerName="mariadb-database-create" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.564259 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62" containerName="mariadb-database-create" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.564515 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62" containerName="mariadb-database-create" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.565284 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.567803 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.581319 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9x5hb-config-dgd95"] Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.613585 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwvwp\" (UniqueName: \"kubernetes.io/projected/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-kube-api-access-vwvwp\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.614451 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-scripts\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.614649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run-ovn\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.614738 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-additional-scripts\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") 
" pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.614875 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-log-ovn\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.614958 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.716909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-scripts\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.717273 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run-ovn\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.717640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-additional-scripts\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.718345 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-log-ovn\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.718496 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.717601 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run-ovn\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.718460 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-log-ovn\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " 
pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.718238 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-additional-scripts\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.718601 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.718797 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-scripts\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.719060 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwvwp\" (UniqueName: \"kubernetes.io/projected/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-kube-api-access-vwvwp\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.748387 4687 generic.go:334] "Generic (PLEG): container finished" podID="30cd364a-718d-4924-91ec-c98402368e0c" containerID="084532cf29d593d9392bb6b0b721e6aa46235245effdbbf003056c27ac60cca9" exitCode=0 Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.750655 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwvwp\" (UniqueName: \"kubernetes.io/projected/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-kube-api-access-vwvwp\") pod \"ovn-controller-9x5hb-config-dgd95\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.754233 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" event={"ID":"30cd364a-718d-4924-91ec-c98402368e0c","Type":"ContainerDied","Data":"084532cf29d593d9392bb6b0b721e6aa46235245effdbbf003056c27ac60cca9"} Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.758939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555544-wmmm5" event={"ID":"9b2e2820-00f3-4111-96a4-4d378529f9fd","Type":"ContainerStarted","Data":"4c69679bf16aea0a53c02eb7863e0e3507640520d624f4668cb0214d42a97c91"} Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.782968 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" event={"ID":"4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62","Type":"ContainerDied","Data":"981f6a5ccd0bcc9be647ae6c20718232cf3a657735fd56932f0799b4b4b794a2"} Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.783144 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="981f6a5ccd0bcc9be647ae6c20718232cf3a657735fd56932f0799b4b4b794a2" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.782994 4687 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.800567 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-89qs5" event={"ID":"0da6dedc-beb0-4108-839d-f234a2cf3caf","Type":"ContainerStarted","Data":"afc09065b9993d82b70db40071051c5e0060c6efcefde26794a3ad23e21f4e95"} Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.800609 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-89qs5" event={"ID":"0da6dedc-beb0-4108-839d-f234a2cf3caf","Type":"ContainerStarted","Data":"0be654a8ce75a25dca167a3b57c691ea0f96cf181374e27358da6fbca6c13afa"} Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.806575 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hm5rd" event={"ID":"b1b1f68f-7bdc-4437-922c-d0abc47c639c","Type":"ContainerDied","Data":"3e30898e0b39483d9051dfb1f67ff164833ae2830b075ae66f897ed3a0017b60"} Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.807414 4687 generic.go:334] "Generic (PLEG): container finished" podID="b1b1f68f-7bdc-4437-922c-d0abc47c639c" containerID="3e30898e0b39483d9051dfb1f67ff164833ae2830b075ae66f897ed3a0017b60" exitCode=0 Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.892784 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-89qs5" podStartSLOduration=2.892767918 podStartE2EDuration="2.892767918s" podCreationTimestamp="2026-03-12 16:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:01.889751354 +0000 UTC m=+1290.853713698" watchObservedRunningTime="2026-03-12 16:24:01.892767918 +0000 UTC m=+1290.856730262" Mar 12 16:24:01 crc kubenswrapper[4687]: I0312 16:24:01.903871 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:02 crc kubenswrapper[4687]: I0312 16:24:02.394692 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9x5hb-config-dgd95"] Mar 12 16:24:02 crc kubenswrapper[4687]: W0312 16:24:02.404548 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b672ee4_3163_4cdb_9f2a_f2ac367b2915.slice/crio-0e86fef02dd4c47968cce7ed48699d77611db942545c078fe4f4ca4ab9d2dc20 WatchSource:0}: Error finding container 0e86fef02dd4c47968cce7ed48699d77611db942545c078fe4f4ca4ab9d2dc20: Status 404 returned error can't find the container with id 0e86fef02dd4c47968cce7ed48699d77611db942545c078fe4f4ca4ab9d2dc20 Mar 12 16:24:02 crc kubenswrapper[4687]: I0312 16:24:02.844172 4687 generic.go:334] "Generic (PLEG): container finished" podID="0da6dedc-beb0-4108-839d-f234a2cf3caf" containerID="afc09065b9993d82b70db40071051c5e0060c6efcefde26794a3ad23e21f4e95" exitCode=0 Mar 12 16:24:02 crc kubenswrapper[4687]: I0312 16:24:02.844618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-89qs5" event={"ID":"0da6dedc-beb0-4108-839d-f234a2cf3caf","Type":"ContainerDied","Data":"afc09065b9993d82b70db40071051c5e0060c6efcefde26794a3ad23e21f4e95"} Mar 12 16:24:02 crc kubenswrapper[4687]: I0312 16:24:02.856458 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9x5hb-config-dgd95" event={"ID":"1b672ee4-3163-4cdb-9f2a-f2ac367b2915","Type":"ContainerStarted","Data":"0e86fef02dd4c47968cce7ed48699d77611db942545c078fe4f4ca4ab9d2dc20"} Mar 12 16:24:03 crc kubenswrapper[4687]: I0312 16:24:03.869422 4687 generic.go:334] "Generic (PLEG): container finished" podID="9b2e2820-00f3-4111-96a4-4d378529f9fd" containerID="2a394af9b2d5ca8a400127e9212ed0b768cf1c40d2bd3bf9ba1abf09eacbd363" exitCode=0 Mar 12 16:24:03 crc kubenswrapper[4687]: I0312 16:24:03.869519 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555544-wmmm5" event={"ID":"9b2e2820-00f3-4111-96a4-4d378529f9fd","Type":"ContainerDied","Data":"2a394af9b2d5ca8a400127e9212ed0b768cf1c40d2bd3bf9ba1abf09eacbd363"} Mar 12 16:24:03 crc kubenswrapper[4687]: I0312 16:24:03.871431 4687 generic.go:334] "Generic (PLEG): container finished" podID="1b672ee4-3163-4cdb-9f2a-f2ac367b2915" containerID="31d8b51e69cfc98e59ae0b5d16e21902f38466446b0e07db8bbceabf6211ff5b" exitCode=0 Mar 12 16:24:03 crc kubenswrapper[4687]: I0312 16:24:03.871464 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9x5hb-config-dgd95" event={"ID":"1b672ee4-3163-4cdb-9f2a-f2ac367b2915","Type":"ContainerDied","Data":"31d8b51e69cfc98e59ae0b5d16e21902f38466446b0e07db8bbceabf6211ff5b"} Mar 12 16:24:05 crc kubenswrapper[4687]: I0312 16:24:05.000227 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:05 crc kubenswrapper[4687]: I0312 16:24:05.003802 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:05 crc kubenswrapper[4687]: I0312 16:24:05.894776 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:06 crc kubenswrapper[4687]: I0312 16:24:06.317061 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9x5hb" Mar 12 
16:24:06 crc kubenswrapper[4687]: I0312 16:24:06.975294 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:24:06 crc kubenswrapper[4687]: I0312 16:24:06.982401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97466c9b-724b-4349-8745-8803b025261a-etc-swift\") pod \"swift-storage-0\" (UID: \"97466c9b-724b-4349-8745-8803b025261a\") " pod="openstack/swift-storage-0" Mar 12 16:24:07 crc kubenswrapper[4687]: I0312 16:24:07.022849 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.034286 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.035282 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="prometheus" containerID="cri-o://87ded7cfd707791f2f0e8ce611c372b1044929a9f3296091708ee917be327222" gracePeriod=600 Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.035391 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="thanos-sidecar" containerID="cri-o://d52f0097a139c9de43de02718c0d92388285045802fa768d8ae7c15745e24192" gracePeriod=600 Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.035428 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="config-reloader" containerID="cri-o://f46d0e7a828227b4eaf07d482669324d5960c3f33f8fb3e70abf1290c49080e7" gracePeriod=600 Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.259550 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.555155 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.844671 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.878062 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.936899 4687 generic.go:334] "Generic (PLEG): container finished" podID="b24db29f-d8ed-49ff-8f32-612345208003" containerID="d52f0097a139c9de43de02718c0d92388285045802fa768d8ae7c15745e24192" exitCode=0 Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.936940 4687 
generic.go:334] "Generic (PLEG): container finished" podID="b24db29f-d8ed-49ff-8f32-612345208003" containerID="f46d0e7a828227b4eaf07d482669324d5960c3f33f8fb3e70abf1290c49080e7" exitCode=0 Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.936948 4687 generic.go:334] "Generic (PLEG): container finished" podID="b24db29f-d8ed-49ff-8f32-612345208003" containerID="87ded7cfd707791f2f0e8ce611c372b1044929a9f3296091708ee917be327222" exitCode=0 Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.936969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerDied","Data":"d52f0097a139c9de43de02718c0d92388285045802fa768d8ae7c15745e24192"} Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.936993 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerDied","Data":"f46d0e7a828227b4eaf07d482669324d5960c3f33f8fb3e70abf1290c49080e7"} Mar 12 16:24:08 crc kubenswrapper[4687]: I0312 16:24:08.937002 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerDied","Data":"87ded7cfd707791f2f0e8ce611c372b1044929a9f3296091708ee917be327222"} Mar 12 16:24:10 crc kubenswrapper[4687]: I0312 16:24:10.002242 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.142:9090/-/ready\": dial tcp 10.217.0.142:9090: connect: connection refused" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.260693 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.262282 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555544-wmmm5" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.271264 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-89qs5" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.290704 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.301502 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.372854 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428118 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run\") pod \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428176 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-swiftconf\") pod \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428220 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5th4\" (UniqueName: \"kubernetes.io/projected/b1b1f68f-7bdc-4437-922c-d0abc47c639c-kube-api-access-g5th4\") pod \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428248 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-scripts\") pod \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428277 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-log-ovn\") pod \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428305 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da6dedc-beb0-4108-839d-f234a2cf3caf-operator-scripts\") pod \"0da6dedc-beb0-4108-839d-f234a2cf3caf\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428403 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwvwp\" (UniqueName: \"kubernetes.io/projected/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-kube-api-access-vwvwp\") pod \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428439 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-scripts\") pod \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428473 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-ring-data-devices\") pod \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-combined-ca-bundle\") pod \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\" (UID: 
\"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428616 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-dispersionconf\") pod \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428653 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-additional-scripts\") pod \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428683 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cd364a-718d-4924-91ec-c98402368e0c-operator-scripts\") pod \"30cd364a-718d-4924-91ec-c98402368e0c\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1b1f68f-7bdc-4437-922c-d0abc47c639c-etc-swift\") pod \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\" (UID: \"b1b1f68f-7bdc-4437-922c-d0abc47c639c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428789 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27tct\" (UniqueName: \"kubernetes.io/projected/0da6dedc-beb0-4108-839d-f234a2cf3caf-kube-api-access-27tct\") pod \"0da6dedc-beb0-4108-839d-f234a2cf3caf\" (UID: \"0da6dedc-beb0-4108-839d-f234a2cf3caf\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428818 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqmn\" (UniqueName: \"kubernetes.io/projected/30cd364a-718d-4924-91ec-c98402368e0c-kube-api-access-bbqmn\") pod \"30cd364a-718d-4924-91ec-c98402368e0c\" (UID: \"30cd364a-718d-4924-91ec-c98402368e0c\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428878 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run-ovn\") pod \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\" (UID: \"1b672ee4-3163-4cdb-9f2a-f2ac367b2915\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.428921 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv7sc\" (UniqueName: \"kubernetes.io/projected/9b2e2820-00f3-4111-96a4-4d378529f9fd-kube-api-access-fv7sc\") pod \"9b2e2820-00f3-4111-96a4-4d378529f9fd\" (UID: \"9b2e2820-00f3-4111-96a4-4d378529f9fd\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.430034 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b1b1f68f-7bdc-4437-922c-d0abc47c639c" (UID: "b1b1f68f-7bdc-4437-922c-d0abc47c639c"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.430082 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1b672ee4-3163-4cdb-9f2a-f2ac367b2915" (UID: "1b672ee4-3163-4cdb-9f2a-f2ac367b2915"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.430406 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da6dedc-beb0-4108-839d-f234a2cf3caf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0da6dedc-beb0-4108-839d-f234a2cf3caf" (UID: "0da6dedc-beb0-4108-839d-f234a2cf3caf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.430599 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cd364a-718d-4924-91ec-c98402368e0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30cd364a-718d-4924-91ec-c98402368e0c" (UID: "30cd364a-718d-4924-91ec-c98402368e0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.431283 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1b672ee4-3163-4cdb-9f2a-f2ac367b2915" (UID: "1b672ee4-3163-4cdb-9f2a-f2ac367b2915"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.432638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1b672ee4-3163-4cdb-9f2a-f2ac367b2915" (UID: "1b672ee4-3163-4cdb-9f2a-f2ac367b2915"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.434422 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-scripts" (OuterVolumeSpecName: "scripts") pod "1b672ee4-3163-4cdb-9f2a-f2ac367b2915" (UID: "1b672ee4-3163-4cdb-9f2a-f2ac367b2915"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.442342 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run" (OuterVolumeSpecName: "var-run") pod "1b672ee4-3163-4cdb-9f2a-f2ac367b2915" (UID: "1b672ee4-3163-4cdb-9f2a-f2ac367b2915"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.443060 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b1f68f-7bdc-4437-922c-d0abc47c639c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b1b1f68f-7bdc-4437-922c-d0abc47c639c" (UID: "b1b1f68f-7bdc-4437-922c-d0abc47c639c"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.447731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cd364a-718d-4924-91ec-c98402368e0c-kube-api-access-bbqmn" (OuterVolumeSpecName: "kube-api-access-bbqmn") pod "30cd364a-718d-4924-91ec-c98402368e0c" (UID: "30cd364a-718d-4924-91ec-c98402368e0c"). InnerVolumeSpecName "kube-api-access-bbqmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.447985 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b1f68f-7bdc-4437-922c-d0abc47c639c-kube-api-access-g5th4" (OuterVolumeSpecName: "kube-api-access-g5th4") pod "b1b1f68f-7bdc-4437-922c-d0abc47c639c" (UID: "b1b1f68f-7bdc-4437-922c-d0abc47c639c"). InnerVolumeSpecName "kube-api-access-g5th4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.447960 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-kube-api-access-vwvwp" (OuterVolumeSpecName: "kube-api-access-vwvwp") pod "1b672ee4-3163-4cdb-9f2a-f2ac367b2915" (UID: "1b672ee4-3163-4cdb-9f2a-f2ac367b2915"). InnerVolumeSpecName "kube-api-access-vwvwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.449378 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da6dedc-beb0-4108-839d-f234a2cf3caf-kube-api-access-27tct" (OuterVolumeSpecName: "kube-api-access-27tct") pod "0da6dedc-beb0-4108-839d-f234a2cf3caf" (UID: "0da6dedc-beb0-4108-839d-f234a2cf3caf"). InnerVolumeSpecName "kube-api-access-27tct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.451213 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2e2820-00f3-4111-96a4-4d378529f9fd-kube-api-access-fv7sc" (OuterVolumeSpecName: "kube-api-access-fv7sc") pod "9b2e2820-00f3-4111-96a4-4d378529f9fd" (UID: "9b2e2820-00f3-4111-96a4-4d378529f9fd"). InnerVolumeSpecName "kube-api-access-fv7sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.454893 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b1b1f68f-7bdc-4437-922c-d0abc47c639c" (UID: "b1b1f68f-7bdc-4437-922c-d0abc47c639c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.460866 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-scripts" (OuterVolumeSpecName: "scripts") pod "b1b1f68f-7bdc-4437-922c-d0abc47c639c" (UID: "b1b1f68f-7bdc-4437-922c-d0abc47c639c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.481039 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b1b1f68f-7bdc-4437-922c-d0abc47c639c" (UID: "b1b1f68f-7bdc-4437-922c-d0abc47c639c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.505797 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1b1f68f-7bdc-4437-922c-d0abc47c639c" (UID: "b1b1f68f-7bdc-4437-922c-d0abc47c639c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.530901 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-tls-assets\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.531212 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-web-config\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.531746 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8m65\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-kube-api-access-k8m65\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.531962 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.532335 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-0\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.532792 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.534785 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.534976 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b24db29f-d8ed-49ff-8f32-612345208003-config-out\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.535226 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-2\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.536147 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-thanos-prometheus-http-client-file\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.536650 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-config\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.537237 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-1\") pod \"b24db29f-d8ed-49ff-8f32-612345208003\" (UID: \"b24db29f-d8ed-49ff-8f32-612345208003\") " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.534976 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-kube-api-access-k8m65" (OuterVolumeSpecName: "kube-api-access-k8m65") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "kube-api-access-k8m65". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.535702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.538185 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24db29f-d8ed-49ff-8f32-612345208003-config-out" (OuterVolumeSpecName: "config-out") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.539975 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.541229 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-config" (OuterVolumeSpecName: "config") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.541258 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.541616 4687 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.541846 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7sc\" (UniqueName: \"kubernetes.io/projected/9b2e2820-00f3-4111-96a4-4d378529f9fd-kube-api-access-fv7sc\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.542016 4687 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.542120 4687 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-run\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549426 4687 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549457 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5th4\" (UniqueName: \"kubernetes.io/projected/b1b1f68f-7bdc-4437-922c-d0abc47c639c-kube-api-access-g5th4\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549475 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549488 4687 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549498 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0da6dedc-beb0-4108-839d-f234a2cf3caf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549506 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwvwp\" (UniqueName: \"kubernetes.io/projected/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-kube-api-access-vwvwp\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549515 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549524 4687 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1b1f68f-7bdc-4437-922c-d0abc47c639c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549532 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc 
kubenswrapper[4687]: I0312 16:24:12.549541 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8m65\" (UniqueName: \"kubernetes.io/projected/b24db29f-d8ed-49ff-8f32-612345208003-kube-api-access-k8m65\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549550 4687 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549559 4687 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1b1f68f-7bdc-4437-922c-d0abc47c639c-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549570 4687 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1b672ee4-3163-4cdb-9f2a-f2ac367b2915-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549579 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cd364a-718d-4924-91ec-c98402368e0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549589 4687 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b24db29f-d8ed-49ff-8f32-612345208003-config-out\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549598 4687 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549608 4687 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549617 4687 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1b1f68f-7bdc-4437-922c-d0abc47c639c-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549627 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27tct\" (UniqueName: \"kubernetes.io/projected/0da6dedc-beb0-4108-839d-f234a2cf3caf-kube-api-access-27tct\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549636 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqmn\" (UniqueName: \"kubernetes.io/projected/30cd364a-718d-4924-91ec-c98402368e0c-kube-api-access-bbqmn\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549645 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.549654 4687 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/b24db29f-d8ed-49ff-8f32-612345208003-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.560707 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "pvc-e5671d99-42d7-4021-a385-39cfd539da76". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.587615 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-web-config" (OuterVolumeSpecName: "web-config") pod "b24db29f-d8ed-49ff-8f32-612345208003" (UID: "b24db29f-d8ed-49ff-8f32-612345208003"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.651328 4687 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b24db29f-d8ed-49ff-8f32-612345208003-web-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.652970 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") on node \"crc\" " Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.681591 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.683125 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e5671d99-42d7-4021-a385-39cfd539da76" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76") on node "crc" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.741181 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 16:24:12 crc kubenswrapper[4687]: W0312 16:24:12.746576 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97466c9b_724b_4349_8745_8803b025261a.slice/crio-d94884b14b02bc58fbb9ae89cf965daf38a8140f9b4ffd5de5365f9f69c9bd96 WatchSource:0}: Error finding container d94884b14b02bc58fbb9ae89cf965daf38a8140f9b4ffd5de5365f9f69c9bd96: Status 404 returned error can't find the container with id d94884b14b02bc58fbb9ae89cf965daf38a8140f9b4ffd5de5365f9f69c9bd96 Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.755152 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.973286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9x5hb-config-dgd95" event={"ID":"1b672ee4-3163-4cdb-9f2a-f2ac367b2915","Type":"ContainerDied","Data":"0e86fef02dd4c47968cce7ed48699d77611db942545c078fe4f4ca4ab9d2dc20"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.973313 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-dgd95" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.973334 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e86fef02dd4c47968cce7ed48699d77611db942545c078fe4f4ca4ab9d2dc20" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.975116 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hm5rd" event={"ID":"b1b1f68f-7bdc-4437-922c-d0abc47c639c","Type":"ContainerDied","Data":"7d28f9b262988647ffa9c99af0edd1271b8545dd5c1079c89b92ba239b3b28ee"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.975153 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d28f9b262988647ffa9c99af0edd1271b8545dd5c1079c89b92ba239b3b28ee" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.975558 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hm5rd" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.981021 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9v52n" event={"ID":"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc","Type":"ContainerStarted","Data":"d2848347e043a5421edfa736791e6a0eb09327b5d50332ce96140841a35161a8"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.983889 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b24db29f-d8ed-49ff-8f32-612345208003","Type":"ContainerDied","Data":"cf8191c1c5560e51450690ffc258d35d50958154bf50c049fa9ca0044ef438f7"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.983932 4687 scope.go:117] "RemoveContainer" containerID="d52f0097a139c9de43de02718c0d92388285045802fa768d8ae7c15745e24192" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.984034 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.985988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" event={"ID":"30cd364a-718d-4924-91ec-c98402368e0c","Type":"ContainerDied","Data":"4a2a28e9840c3747e005167a1379d64bcf5372546be9fa387dda3079818eff73"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.986021 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2a28e9840c3747e005167a1379d64bcf5372546be9fa387dda3079818eff73" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.986082 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-f8f1-account-create-update-tp6zx" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.992559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555544-wmmm5" event={"ID":"9b2e2820-00f3-4111-96a4-4d378529f9fd","Type":"ContainerDied","Data":"4c69679bf16aea0a53c02eb7863e0e3507640520d624f4668cb0214d42a97c91"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.992661 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c69679bf16aea0a53c02eb7863e0e3507640520d624f4668cb0214d42a97c91" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.993956 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555544-wmmm5" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.994025 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"d94884b14b02bc58fbb9ae89cf965daf38a8140f9b4ffd5de5365f9f69c9bd96"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.997339 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-89qs5" event={"ID":"0da6dedc-beb0-4108-839d-f234a2cf3caf","Type":"ContainerDied","Data":"0be654a8ce75a25dca167a3b57c691ea0f96cf181374e27358da6fbca6c13afa"} Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.997385 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be654a8ce75a25dca167a3b57c691ea0f96cf181374e27358da6fbca6c13afa" Mar 12 16:24:12 crc kubenswrapper[4687]: I0312 16:24:12.997585 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-89qs5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.003678 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9v52n" podStartSLOduration=2.93306082 podStartE2EDuration="22.003662086s" podCreationTimestamp="2026-03-12 16:23:51 +0000 UTC" firstStartedPulling="2026-03-12 16:23:53.072626913 +0000 UTC m=+1282.036589247" lastFinishedPulling="2026-03-12 16:24:12.143228169 +0000 UTC m=+1301.107190513" observedRunningTime="2026-03-12 16:24:13.002546425 +0000 UTC m=+1301.966508769" watchObservedRunningTime="2026-03-12 16:24:13.003662086 +0000 UTC m=+1301.967624430" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.076837 4687 scope.go:117] "RemoveContainer" containerID="f46d0e7a828227b4eaf07d482669324d5960c3f33f8fb3e70abf1290c49080e7" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.115149 4687 scope.go:117] "RemoveContainer" containerID="87ded7cfd707791f2f0e8ce611c372b1044929a9f3296091708ee917be327222" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.126598 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.138544 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.155624 4687 scope.go:117] "RemoveContainer" containerID="07482ee2404f472217616b3bb74caeb560f705a9a937e22de589a6e9d3cd4c4a" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.211669 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212119 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b1f68f-7bdc-4437-922c-d0abc47c639c" containerName="swift-ring-rebalance" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212135 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b1f68f-7bdc-4437-922c-d0abc47c639c" containerName="swift-ring-rebalance" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212148 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da6dedc-beb0-4108-839d-f234a2cf3caf" containerName="mariadb-account-create-update" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212156 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da6dedc-beb0-4108-839d-f234a2cf3caf" containerName="mariadb-account-create-update" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212166 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2e2820-00f3-4111-96a4-4d378529f9fd" containerName="oc" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212171 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2e2820-00f3-4111-96a4-4d378529f9fd" containerName="oc" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212186 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="config-reloader" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212191 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="config-reloader" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212200 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b672ee4-3163-4cdb-9f2a-f2ac367b2915" containerName="ovn-config" Mar 12 16:24:13 crc 
kubenswrapper[4687]: I0312 16:24:13.212206 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b672ee4-3163-4cdb-9f2a-f2ac367b2915" containerName="ovn-config" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212217 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="prometheus" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212222 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="prometheus" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212242 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="init-config-reloader" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212248 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="init-config-reloader" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212256 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="thanos-sidecar" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212261 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="thanos-sidecar" Mar 12 16:24:13 crc kubenswrapper[4687]: E0312 16:24:13.212268 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cd364a-718d-4924-91ec-c98402368e0c" containerName="mariadb-account-create-update" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212274 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cd364a-718d-4924-91ec-c98402368e0c" containerName="mariadb-account-create-update" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212454 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b1f68f-7bdc-4437-922c-d0abc47c639c" containerName="swift-ring-rebalance" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212471 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="thanos-sidecar" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212515 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2e2820-00f3-4111-96a4-4d378529f9fd" containerName="oc" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212527 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b672ee4-3163-4cdb-9f2a-f2ac367b2915" containerName="ovn-config" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212540 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da6dedc-beb0-4108-839d-f234a2cf3caf" containerName="mariadb-account-create-update" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212577 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="prometheus" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212586 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24db29f-d8ed-49ff-8f32-612345208003" containerName="config-reloader" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.212600 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cd364a-718d-4924-91ec-c98402368e0c" containerName="mariadb-account-create-update" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.215857 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.220723 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.222436 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.223809 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.224041 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.224153 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.224262 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.224483 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.224632 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.224791 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-z5sdp" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.233805 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.371873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.371951 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.371987 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372025 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1d3c2a08-d60e-4b86-858d-f5ac038f566e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372101 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgzxm\" (UniqueName: \"kubernetes.io/projected/1d3c2a08-d60e-4b86-858d-f5ac038f566e-kube-api-access-cgzxm\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372571 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1d3c2a08-d60e-4b86-858d-f5ac038f566e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372635 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372721 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372890 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-config\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.372983 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " 
pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.373042 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.411939 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555538-m77gz"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.422958 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555538-m77gz"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.450940 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9x5hb-config-dgd95"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.463196 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9x5hb-config-dgd95"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.478428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1d3c2a08-d60e-4b86-858d-f5ac038f566e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.478511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgzxm\" (UniqueName: \"kubernetes.io/projected/1d3c2a08-d60e-4b86-858d-f5ac038f566e-kube-api-access-cgzxm\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.478601 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.478824 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1d3c2a08-d60e-4b86-858d-f5ac038f566e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.478880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.478945 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.479151 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-config\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.479226 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.479267 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.479315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.479464 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.479503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.479632 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.480118 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 
16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.481893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.482978 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1d3c2a08-d60e-4b86-858d-f5ac038f566e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.486640 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1d3c2a08-d60e-4b86-858d-f5ac038f566e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.487833 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.491899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.492157 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.492199 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0120afcda32ee9517420f0f0356e40661dba3e83466953a4986803f80a521621/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.494102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1d3c2a08-d60e-4b86-858d-f5ac038f566e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.494520 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-config\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.494529 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.497062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgzxm\" (UniqueName: \"kubernetes.io/projected/1d3c2a08-d60e-4b86-858d-f5ac038f566e-kube-api-access-cgzxm\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.502877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.503412 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3c2a08-d60e-4b86-858d-f5ac038f566e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.563593 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e5671d99-42d7-4021-a385-39cfd539da76\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5671d99-42d7-4021-a385-39cfd539da76\") pod \"prometheus-metric-storage-0\" (UID: \"1d3c2a08-d60e-4b86-858d-f5ac038f566e\") " pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.598667 
4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9x5hb-config-crtl5"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.600107 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.609376 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9x5hb-config-crtl5"] Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.610808 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.759818 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b672ee4-3163-4cdb-9f2a-f2ac367b2915" path="/var/lib/kubelet/pods/1b672ee4-3163-4cdb-9f2a-f2ac367b2915/volumes" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.760913 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817df5b4-8432-4fb7-a823-330d30f4bb59" path="/var/lib/kubelet/pods/817df5b4-8432-4fb7-a823-330d30f4bb59/volumes" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.761894 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24db29f-d8ed-49ff-8f32-612345208003" path="/var/lib/kubelet/pods/b24db29f-d8ed-49ff-8f32-612345208003/volumes" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.786080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.786119 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-log-ovn\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.787173 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run-ovn\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.787377 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-scripts\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.787431 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-additional-scripts\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.787579 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sddr8\" (UniqueName: \"kubernetes.io/projected/8bde159c-8569-4b10-b3ec-a4857a4317d9-kube-api-access-sddr8\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.839039 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.890784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.890843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-log-ovn\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.890881 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run-ovn\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.890967 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-scripts\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.891016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-additional-scripts\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.891066 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sddr8\" (UniqueName: \"kubernetes.io/projected/8bde159c-8569-4b10-b3ec-a4857a4317d9-kube-api-access-sddr8\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.892243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run-ovn\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.892245 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-log-ovn\") pod 
\"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.892275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.892670 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-additional-scripts\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.893402 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-scripts\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:13 crc kubenswrapper[4687]: I0312 16:24:13.938154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sddr8\" (UniqueName: \"kubernetes.io/projected/8bde159c-8569-4b10-b3ec-a4857a4317d9-kube-api-access-sddr8\") pod \"ovn-controller-9x5hb-config-crtl5\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.122145 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.122451 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.237230 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.450756 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.457189 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.460787 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.477954 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:24:14 crc kubenswrapper[4687]: W0312 16:24:14.510857 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3c2a08_d60e_4b86_858d_f5ac038f566e.slice/crio-6c4d33a05099157a6ef1d1f59de6762a9c126a5915dd0c69921649003240a95f WatchSource:0}: Error finding container 6c4d33a05099157a6ef1d1f59de6762a9c126a5915dd0c69921649003240a95f: Status 404 returned error can't find the container with id 6c4d33a05099157a6ef1d1f59de6762a9c126a5915dd0c69921649003240a95f Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.524492 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.615336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqzr\" (UniqueName: \"kubernetes.io/projected/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-kube-api-access-djqzr\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.615661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.615702 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.724605 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.724649 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djqzr\" (UniqueName: \"kubernetes.io/projected/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-kube-api-access-djqzr\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.724687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.730076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.732201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.743407 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djqzr\" (UniqueName: \"kubernetes.io/projected/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-kube-api-access-djqzr\") pod \"mysqld-exporter-0\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " pod="openstack/mysqld-exporter-0" Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.961097 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9x5hb-config-crtl5"] Mar 12 16:24:14 crc kubenswrapper[4687]: W0312 16:24:14.969150 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bde159c_8569_4b10_b3ec_a4857a4317d9.slice/crio-cb5dae388121249b18e8143a88ad577386a474bb97d2b00799548cbe403f5c63 WatchSource:0}: Error finding container cb5dae388121249b18e8143a88ad577386a474bb97d2b00799548cbe403f5c63: Status 404 returned error can't find the container with id cb5dae388121249b18e8143a88ad577386a474bb97d2b00799548cbe403f5c63 Mar 12 16:24:14 crc kubenswrapper[4687]: I0312 16:24:14.984042 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 16:24:15 crc kubenswrapper[4687]: I0312 16:24:15.055730 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9x5hb-config-crtl5" event={"ID":"8bde159c-8569-4b10-b3ec-a4857a4317d9","Type":"ContainerStarted","Data":"cb5dae388121249b18e8143a88ad577386a474bb97d2b00799548cbe403f5c63"} Mar 12 16:24:15 crc kubenswrapper[4687]: I0312 16:24:15.057348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1d3c2a08-d60e-4b86-858d-f5ac038f566e","Type":"ContainerStarted","Data":"6c4d33a05099157a6ef1d1f59de6762a9c126a5915dd0c69921649003240a95f"} Mar 12 16:24:15 crc kubenswrapper[4687]: I0312 16:24:15.077006 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"8a40af3adaf2c59d581b06af1efa079c3b6e91d27282732c5871a83e761c6df1"} Mar 12 16:24:15 crc kubenswrapper[4687]: I0312 16:24:15.077050 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"e3a9a771c5f5beea05848b7e68cb90340fe68e56a65222b538d9c9aa180037d2"} Mar 12 16:24:15 crc kubenswrapper[4687]: I0312 16:24:15.588682 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:24:16 crc kubenswrapper[4687]: I0312 16:24:16.086322 4687 generic.go:334] "Generic (PLEG): container finished" podID="8bde159c-8569-4b10-b3ec-a4857a4317d9" containerID="a522b6c90f7b5fa65d7452ae3a2f13101a83b45dff291a8f1ead8c7d88ab535c" exitCode=0 Mar 12 16:24:16 crc kubenswrapper[4687]: I0312 16:24:16.087059 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovn-controller-9x5hb-config-crtl5" event={"ID":"8bde159c-8569-4b10-b3ec-a4857a4317d9","Type":"ContainerDied","Data":"a522b6c90f7b5fa65d7452ae3a2f13101a83b45dff291a8f1ead8c7d88ab535c"} Mar 12 16:24:16 crc kubenswrapper[4687]: I0312 16:24:16.088202 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6","Type":"ContainerStarted","Data":"e04be4784875d78439338854bd5bd5ac355dce05f8a39074f50ff06acf1ea586"} Mar 12 16:24:16 crc kubenswrapper[4687]: I0312 16:24:16.090593 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"c2212c0e5f038d6bba884cdd893767be1328d8d7fb4a9fa6918703dd18b6a536"} Mar 12 16:24:16 crc kubenswrapper[4687]: I0312 16:24:16.090620 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"b55420f1365f26e6d2bad17cbb5fa773dbde01cec2e9beefc58b4d7c607aedbb"} Mar 12 16:24:16 crc kubenswrapper[4687]: I0312 16:24:16.693166 4687 scope.go:117] "RemoveContainer" containerID="80d3001c8aeacbafac09cb9e169c90812443f58c6190dc04a9b61436a60ca061" Mar 12 16:24:18 crc kubenswrapper[4687]: I0312 16:24:18.554637 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 12 16:24:18 crc kubenswrapper[4687]: I0312 16:24:18.843880 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Mar 12 16:24:18 crc kubenswrapper[4687]: I0312 16:24:18.874622 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.130201 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1d3c2a08-d60e-4b86-858d-f5ac038f566e","Type":"ContainerStarted","Data":"f4bcfb368981c504252c74bb8eeba5a678bbd63aa4041f0601bbbebb6bc5c237"} Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.136134 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9x5hb-config-crtl5" event={"ID":"8bde159c-8569-4b10-b3ec-a4857a4317d9","Type":"ContainerDied","Data":"cb5dae388121249b18e8143a88ad577386a474bb97d2b00799548cbe403f5c63"} Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.136170 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb5dae388121249b18e8143a88ad577386a474bb97d2b00799548cbe403f5c63" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.335819 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.382466 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run\") pod \"8bde159c-8569-4b10-b3ec-a4857a4317d9\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.382563 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-additional-scripts\") pod \"8bde159c-8569-4b10-b3ec-a4857a4317d9\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.382735 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sddr8\" (UniqueName: \"kubernetes.io/projected/8bde159c-8569-4b10-b3ec-a4857a4317d9-kube-api-access-sddr8\") pod \"8bde159c-8569-4b10-b3ec-a4857a4317d9\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.382783 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run-ovn\") pod \"8bde159c-8569-4b10-b3ec-a4857a4317d9\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.382882 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-scripts\") pod \"8bde159c-8569-4b10-b3ec-a4857a4317d9\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.382947 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-log-ovn\") pod \"8bde159c-8569-4b10-b3ec-a4857a4317d9\" (UID: \"8bde159c-8569-4b10-b3ec-a4857a4317d9\") " Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.382959 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run" (OuterVolumeSpecName: "var-run") pod "8bde159c-8569-4b10-b3ec-a4857a4317d9" (UID: "8bde159c-8569-4b10-b3ec-a4857a4317d9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.383027 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8bde159c-8569-4b10-b3ec-a4857a4317d9" (UID: "8bde159c-8569-4b10-b3ec-a4857a4317d9"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.383490 4687 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.383512 4687 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-run\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.383563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8bde159c-8569-4b10-b3ec-a4857a4317d9" (UID: "8bde159c-8569-4b10-b3ec-a4857a4317d9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.383909 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8bde159c-8569-4b10-b3ec-a4857a4317d9" (UID: "8bde159c-8569-4b10-b3ec-a4857a4317d9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.384540 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-scripts" (OuterVolumeSpecName: "scripts") pod "8bde159c-8569-4b10-b3ec-a4857a4317d9" (UID: "8bde159c-8569-4b10-b3ec-a4857a4317d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.401904 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bde159c-8569-4b10-b3ec-a4857a4317d9-kube-api-access-sddr8" (OuterVolumeSpecName: "kube-api-access-sddr8") pod "8bde159c-8569-4b10-b3ec-a4857a4317d9" (UID: "8bde159c-8569-4b10-b3ec-a4857a4317d9"). InnerVolumeSpecName "kube-api-access-sddr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.485550 4687 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.485586 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sddr8\" (UniqueName: \"kubernetes.io/projected/8bde159c-8569-4b10-b3ec-a4857a4317d9-kube-api-access-sddr8\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.485598 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bde159c-8569-4b10-b3ec-a4857a4317d9-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:20 crc kubenswrapper[4687]: I0312 16:24:20.485608 4687 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bde159c-8569-4b10-b3ec-a4857a4317d9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.145466 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6","Type":"ContainerStarted","Data":"bd8c38f6cd0da85a8b4edc6cbbec645ce6d6c58579356d5bdd85efc4d94e3d2d"} Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.150076 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"d9c91d5e8d373b26d41b413fb06f0618fa9cf9ccc0df6d6b967184d1b3c57166"} Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.150104 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"eea089cb0a1ea78965d52d4b889a24e03d208b91680bc076c621b3e9ed5a6aae"} Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.150112 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"e9100b1c9e4f1e06943673f534bd6806dc704084a3543792f656f128c76ade7c"} Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.150120 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"e1b769bc04df3f1cea5c535a3de649152cf538cd51ff8d30faa99caadb3515b4"} Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.150164 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9x5hb-config-crtl5" Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.167486 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.71499517 podStartE2EDuration="7.167466765s" podCreationTimestamp="2026-03-12 16:24:14 +0000 UTC" firstStartedPulling="2026-03-12 16:24:15.568643061 +0000 UTC m=+1304.532605405" lastFinishedPulling="2026-03-12 16:24:20.021114656 +0000 UTC m=+1308.985077000" observedRunningTime="2026-03-12 16:24:21.159960559 +0000 UTC m=+1310.123922923" watchObservedRunningTime="2026-03-12 16:24:21.167466765 +0000 UTC m=+1310.131429109" Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.421036 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9x5hb-config-crtl5"] Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.433941 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9x5hb-config-crtl5"] Mar 12 16:24:21 crc kubenswrapper[4687]: I0312 16:24:21.758649 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bde159c-8569-4b10-b3ec-a4857a4317d9" path="/var/lib/kubelet/pods/8bde159c-8569-4b10-b3ec-a4857a4317d9/volumes" Mar 12 16:24:22 crc kubenswrapper[4687]: I0312 16:24:22.160683 4687 generic.go:334] "Generic (PLEG): container finished" podID="eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" containerID="d2848347e043a5421edfa736791e6a0eb09327b5d50332ce96140841a35161a8" exitCode=0 Mar 12 16:24:22 crc kubenswrapper[4687]: I0312 16:24:22.160769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9v52n" event={"ID":"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc","Type":"ContainerDied","Data":"d2848347e043a5421edfa736791e6a0eb09327b5d50332ce96140841a35161a8"} Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.220692 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"bafd721c663788abd77e82c7bfd63a905c1da802a3d8be24b095220073d9b54c"} Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.221178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"1e7d79a911a0f3fa8d7c901d8a59fb6011eaa000da147320df5d210210dcb236"} Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.221188 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"d42214288ff3503655e5022b3bbea5c28f87d0186507d7a6a3626028b0e33d2c"} Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.736278 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9v52n" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.854919 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-combined-ca-bundle\") pod \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.855346 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgv8x\" (UniqueName: \"kubernetes.io/projected/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-kube-api-access-fgv8x\") pod \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.856448 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-config-data\") pod \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.856489 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-db-sync-config-data\") pod \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\" (UID: \"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc\") " Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.859298 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-kube-api-access-fgv8x" (OuterVolumeSpecName: "kube-api-access-fgv8x") pod "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" (UID: "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc"). InnerVolumeSpecName "kube-api-access-fgv8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.859775 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" (UID: "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.887307 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" (UID: "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.916885 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-config-data" (OuterVolumeSpecName: "config-data") pod "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" (UID: "eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.959058 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.959104 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.959119 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:23 crc kubenswrapper[4687]: I0312 16:24:23.959131 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgv8x\" (UniqueName: \"kubernetes.io/projected/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc-kube-api-access-fgv8x\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.236720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"72d973c70d91b8aca6f23dbc6a8bd38dcf597a59ef48a2bce0e7deeffd8779a9"} Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.236772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"05860ca1d4d352063db2ea614d8e243f9aecd5512f76ddd351f5d67bc35bc741"} Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.236789 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"d81f3677774d55dcaac3f0b794b23e96ec2ac91abd5742d9d834fa05c41027f6"} Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.236798 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"97466c9b-724b-4349-8745-8803b025261a","Type":"ContainerStarted","Data":"186bf0c1618f563f0fa472a436cf7e81f615144cda2f07fbeb672df3202a7a60"} Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.238726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9v52n" event={"ID":"eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc","Type":"ContainerDied","Data":"742c62cc219c9a00afccafad94457e80c6608ce4ecbbbe0536387dd52c67f368"} Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.238771 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742c62cc219c9a00afccafad94457e80c6608ce4ecbbbe0536387dd52c67f368" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.238777 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9v52n" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.282167 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.576673966 podStartE2EDuration="51.282151488s" podCreationTimestamp="2026-03-12 16:23:33 +0000 UTC" firstStartedPulling="2026-03-12 16:24:12.751089965 +0000 UTC m=+1301.715052309" lastFinishedPulling="2026-03-12 16:24:22.456567487 +0000 UTC m=+1311.420529831" observedRunningTime="2026-03-12 16:24:24.271713074 +0000 UTC m=+1313.235675418" watchObservedRunningTime="2026-03-12 16:24:24.282151488 +0000 UTC m=+1313.246113832" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.608274 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qcw6k"] Mar 12 16:24:24 crc kubenswrapper[4687]: E0312 16:24:24.608966 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" containerName="glance-db-sync" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.608984 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" containerName="glance-db-sync" Mar 12 16:24:24 crc kubenswrapper[4687]: E0312 16:24:24.609010 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bde159c-8569-4b10-b3ec-a4857a4317d9" containerName="ovn-config" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.609016 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bde159c-8569-4b10-b3ec-a4857a4317d9" containerName="ovn-config" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.609200 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bde159c-8569-4b10-b3ec-a4857a4317d9" containerName="ovn-config" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.609224 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" containerName="glance-db-sync" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.610288 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.636943 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qcw6k"] Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.708596 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qcw6k"] Mar 12 16:24:24 crc kubenswrapper[4687]: E0312 16:24:24.709426 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-d8m8w ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" podUID="adcb0fa9-005d-46e3-8b14-cd082d811621" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.738265 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b44xh"] Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.739796 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.741451 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.750628 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b44xh"] Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.782191 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8m8w\" (UniqueName: \"kubernetes.io/projected/adcb0fa9-005d-46e3-8b14-cd082d811621-kube-api-access-d8m8w\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.782417 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.782756 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-config\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.782901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.782973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.884934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885038 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8m8w\" (UniqueName: \"kubernetes.io/projected/adcb0fa9-005d-46e3-8b14-cd082d811621-kube-api-access-d8m8w\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885085 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: 
\"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885115 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-config\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885133 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwfvq\" (UniqueName: \"kubernetes.io/projected/e7ca2b54-480e-437e-951f-37487cf288da-kube-api-access-rwfvq\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885180 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885203 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885237 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885259 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-config\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885273 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.885309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.886110 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" 
(UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.886627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.887337 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.887876 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-config\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.904794 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8m8w\" (UniqueName: \"kubernetes.io/projected/adcb0fa9-005d-46e3-8b14-cd082d811621-kube-api-access-d8m8w\") pod \"dnsmasq-dns-5b946c75cc-qcw6k\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.987092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.987159 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.987978 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.988136 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.988195 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc 
kubenswrapper[4687]: I0312 16:24:24.988227 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.988819 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.988904 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.989244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-config\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.989272 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwfvq\" (UniqueName: \"kubernetes.io/projected/e7ca2b54-480e-437e-951f-37487cf288da-kube-api-access-rwfvq\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:24 crc kubenswrapper[4687]: I0312 16:24:24.990094 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-config\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.007091 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwfvq\" (UniqueName: \"kubernetes.io/projected/e7ca2b54-480e-437e-951f-37487cf288da-kube-api-access-rwfvq\") pod \"dnsmasq-dns-74f6bcbc87-b44xh\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.077630 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.246720 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.260744 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.399375 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-dns-svc\") pod \"adcb0fa9-005d-46e3-8b14-cd082d811621\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.399451 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-nb\") pod \"adcb0fa9-005d-46e3-8b14-cd082d811621\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.399537 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-config\") pod \"adcb0fa9-005d-46e3-8b14-cd082d811621\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.399616 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8m8w\" (UniqueName: \"kubernetes.io/projected/adcb0fa9-005d-46e3-8b14-cd082d811621-kube-api-access-d8m8w\") pod \"adcb0fa9-005d-46e3-8b14-cd082d811621\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.399676 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-sb\") pod \"adcb0fa9-005d-46e3-8b14-cd082d811621\" (UID: \"adcb0fa9-005d-46e3-8b14-cd082d811621\") " Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.400192 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-config" (OuterVolumeSpecName: "config") pod "adcb0fa9-005d-46e3-8b14-cd082d811621" (UID: "adcb0fa9-005d-46e3-8b14-cd082d811621"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.400453 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adcb0fa9-005d-46e3-8b14-cd082d811621" (UID: "adcb0fa9-005d-46e3-8b14-cd082d811621"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.400571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adcb0fa9-005d-46e3-8b14-cd082d811621" (UID: "adcb0fa9-005d-46e3-8b14-cd082d811621"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.400608 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adcb0fa9-005d-46e3-8b14-cd082d811621" (UID: "adcb0fa9-005d-46e3-8b14-cd082d811621"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.401712 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.401726 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.401735 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.401743 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adcb0fa9-005d-46e3-8b14-cd082d811621-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.459860 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adcb0fa9-005d-46e3-8b14-cd082d811621-kube-api-access-d8m8w" (OuterVolumeSpecName: "kube-api-access-d8m8w") pod "adcb0fa9-005d-46e3-8b14-cd082d811621" (UID: "adcb0fa9-005d-46e3-8b14-cd082d811621"). InnerVolumeSpecName "kube-api-access-d8m8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.504215 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8m8w\" (UniqueName: \"kubernetes.io/projected/adcb0fa9-005d-46e3-8b14-cd082d811621-kube-api-access-d8m8w\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:25 crc kubenswrapper[4687]: I0312 16:24:25.643176 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b44xh"] Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.282914 4687 generic.go:334] "Generic (PLEG): container finished" podID="e7ca2b54-480e-437e-951f-37487cf288da" containerID="41f611ff77ae05ae6f437a0e54feb2215f2061a735acb6f97f0a76c3464bf833" exitCode=0 Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.283193 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" event={"ID":"e7ca2b54-480e-437e-951f-37487cf288da","Type":"ContainerDied","Data":"41f611ff77ae05ae6f437a0e54feb2215f2061a735acb6f97f0a76c3464bf833"} Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.283218 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" event={"ID":"e7ca2b54-480e-437e-951f-37487cf288da","Type":"ContainerStarted","Data":"05d8d038af9318ef6f5e32d78cc74917d8f273e601df68e6d7dc41426e0fe0e6"} Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.320574 4687 generic.go:334] "Generic (PLEG): container finished" podID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerID="f4bcfb368981c504252c74bb8eeba5a678bbd63aa4041f0601bbbebb6bc5c237" exitCode=0 Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.320651 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qcw6k" Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.321147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1d3c2a08-d60e-4b86-858d-f5ac038f566e","Type":"ContainerDied","Data":"f4bcfb368981c504252c74bb8eeba5a678bbd63aa4041f0601bbbebb6bc5c237"} Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.425434 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qcw6k"] Mar 12 16:24:26 crc kubenswrapper[4687]: I0312 16:24:26.432946 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qcw6k"] Mar 12 16:24:27 crc kubenswrapper[4687]: I0312 16:24:27.335578 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1d3c2a08-d60e-4b86-858d-f5ac038f566e","Type":"ContainerStarted","Data":"022cb0c36de9fcdcb94461e9cde8fd1ea73c9ee0c20363e14d028df59d210f76"} Mar 12 16:24:27 crc kubenswrapper[4687]: I0312 16:24:27.337728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" event={"ID":"e7ca2b54-480e-437e-951f-37487cf288da","Type":"ContainerStarted","Data":"3c1bffbaaca8c35b89b7a77128e2905c6f88d5962c9843c013c9cb7d8ffc1acb"} Mar 12 16:24:27 crc kubenswrapper[4687]: I0312 16:24:27.339542 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:27 crc kubenswrapper[4687]: I0312 16:24:27.366605 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" podStartSLOduration=3.366589147 podStartE2EDuration="3.366589147s" podCreationTimestamp="2026-03-12 16:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:27.36082322 +0000 UTC m=+1316.324785564" watchObservedRunningTime="2026-03-12 16:24:27.366589147 +0000 UTC m=+1316.330551491" Mar 12 16:24:27 crc kubenswrapper[4687]: I0312 16:24:27.744897 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adcb0fa9-005d-46e3-8b14-cd082d811621" path="/var/lib/kubelet/pods/adcb0fa9-005d-46e3-8b14-cd082d811621/volumes" Mar 12 16:24:28 crc kubenswrapper[4687]: I0312 16:24:28.845645 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.248599 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-p7mnw"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.249872 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.259423 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-p7mnw"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.381399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jvqg\" (UniqueName: \"kubernetes.io/projected/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-kube-api-access-2jvqg\") pod \"heat-db-create-p7mnw\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.381469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-operator-scripts\") pod \"heat-db-create-p7mnw\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.396826 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-m94x4"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.398079 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.415272 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m94x4"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.483747 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214419-840c-47c7-ae24-0fa13e511604-operator-scripts\") pod \"cinder-db-create-m94x4\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.483917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jvqg\" (UniqueName: \"kubernetes.io/projected/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-kube-api-access-2jvqg\") pod \"heat-db-create-p7mnw\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.483978 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-operator-scripts\") pod \"heat-db-create-p7mnw\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.484033 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5n85\" (UniqueName: \"kubernetes.io/projected/bc214419-840c-47c7-ae24-0fa13e511604-kube-api-access-m5n85\") pod \"cinder-db-create-m94x4\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.485062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-operator-scripts\") pod \"heat-db-create-p7mnw\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.500650 4687 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/heat-62ec-account-create-update-9jt56"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.501952 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.511662 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.519155 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-62ec-account-create-update-9jt56"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.573612 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jvqg\" (UniqueName: \"kubernetes.io/projected/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-kube-api-access-2jvqg\") pod \"heat-db-create-p7mnw\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.585848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5n85\" (UniqueName: \"kubernetes.io/projected/bc214419-840c-47c7-ae24-0fa13e511604-kube-api-access-m5n85\") pod \"cinder-db-create-m94x4\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.585951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214419-840c-47c7-ae24-0fa13e511604-operator-scripts\") pod \"cinder-db-create-m94x4\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.586056 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdp7z\" (UniqueName: \"kubernetes.io/projected/fa45e688-7b23-42a0-af5d-36c3a074344f-kube-api-access-zdp7z\") pod \"heat-62ec-account-create-update-9jt56\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.586089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa45e688-7b23-42a0-af5d-36c3a074344f-operator-scripts\") pod \"heat-62ec-account-create-update-9jt56\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.587207 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214419-840c-47c7-ae24-0fa13e511604-operator-scripts\") pod \"cinder-db-create-m94x4\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.621153 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-82n95"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.622448 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.628380 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5n85\" (UniqueName: \"kubernetes.io/projected/bc214419-840c-47c7-ae24-0fa13e511604-kube-api-access-m5n85\") pod \"cinder-db-create-m94x4\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.628798 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lg9js" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.628960 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.630573 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.634674 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.651935 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-82n95"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.687485 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m2np\" (UniqueName: \"kubernetes.io/projected/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-kube-api-access-9m2np\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.687523 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-combined-ca-bundle\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.687555 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-config-data\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.688096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdp7z\" (UniqueName: \"kubernetes.io/projected/fa45e688-7b23-42a0-af5d-36c3a074344f-kube-api-access-zdp7z\") pod \"heat-62ec-account-create-update-9jt56\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.688189 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa45e688-7b23-42a0-af5d-36c3a074344f-operator-scripts\") pod \"heat-62ec-account-create-update-9jt56\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.689068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa45e688-7b23-42a0-af5d-36c3a074344f-operator-scripts\") pod 
\"heat-62ec-account-create-update-9jt56\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.694848 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-aca1-account-create-update-nxln9"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.696083 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.701640 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.713438 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-aca1-account-create-update-nxln9"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.736337 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.749928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdp7z\" (UniqueName: \"kubernetes.io/projected/fa45e688-7b23-42a0-af5d-36c3a074344f-kube-api-access-zdp7z\") pod \"heat-62ec-account-create-update-9jt56\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.790562 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m2np\" (UniqueName: \"kubernetes.io/projected/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-kube-api-access-9m2np\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.790603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-combined-ca-bundle\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.790684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-config-data\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.790747 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec53b11-d489-4b3d-9ab3-f71837d60140-operator-scripts\") pod \"cinder-aca1-account-create-update-nxln9\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.790963 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhlj5\" (UniqueName: \"kubernetes.io/projected/2ec53b11-d489-4b3d-9ab3-f71837d60140-kube-api-access-hhlj5\") pod \"cinder-aca1-account-create-update-nxln9\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.815416 4687 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/barbican-db-create-6dk8w"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.824023 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.868440 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6dk8w"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.868928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-combined-ca-bundle\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.893393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-config-data\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.893424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m2np\" (UniqueName: \"kubernetes.io/projected/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-kube-api-access-9m2np\") pod \"keystone-db-sync-82n95\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.893746 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aab1c2f-2474-4289-a7f9-f95918c43526-operator-scripts\") pod \"barbican-db-create-6dk8w\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.893860 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhlj5\" (UniqueName: \"kubernetes.io/projected/2ec53b11-d489-4b3d-9ab3-f71837d60140-kube-api-access-hhlj5\") pod \"cinder-aca1-account-create-update-nxln9\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.893917 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4kmk\" (UniqueName: \"kubernetes.io/projected/3aab1c2f-2474-4289-a7f9-f95918c43526-kube-api-access-w4kmk\") pod \"barbican-db-create-6dk8w\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.894013 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec53b11-d489-4b3d-9ab3-f71837d60140-operator-scripts\") pod \"cinder-aca1-account-create-update-nxln9\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.895435 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec53b11-d489-4b3d-9ab3-f71837d60140-operator-scripts\") pod \"cinder-aca1-account-create-update-nxln9\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " 
pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.934239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhlj5\" (UniqueName: \"kubernetes.io/projected/2ec53b11-d489-4b3d-9ab3-f71837d60140-kube-api-access-hhlj5\") pod \"cinder-aca1-account-create-update-nxln9\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.967170 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fpswn"] Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.969766 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.996595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aab1c2f-2474-4289-a7f9-f95918c43526-operator-scripts\") pod \"barbican-db-create-6dk8w\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.996645 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056e0820-c20f-4aae-9019-5215b548730d-operator-scripts\") pod \"neutron-db-create-fpswn\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.996707 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4kmk\" (UniqueName: \"kubernetes.io/projected/3aab1c2f-2474-4289-a7f9-f95918c43526-kube-api-access-w4kmk\") pod \"barbican-db-create-6dk8w\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.996734 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xk2z\" (UniqueName: \"kubernetes.io/projected/056e0820-c20f-4aae-9019-5215b548730d-kube-api-access-9xk2z\") pod \"neutron-db-create-fpswn\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:29 crc kubenswrapper[4687]: I0312 16:24:29.997663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aab1c2f-2474-4289-a7f9-f95918c43526-operator-scripts\") pod \"barbican-db-create-6dk8w\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.004990 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-99aa-account-create-update-pqqnd"] Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.007061 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.009613 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.031221 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fpswn"] Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.060734 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-99aa-account-create-update-pqqnd"] Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.065239 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.067187 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4kmk\" (UniqueName: \"kubernetes.io/projected/3aab1c2f-2474-4289-a7f9-f95918c43526-kube-api-access-w4kmk\") pod \"barbican-db-create-6dk8w\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.095929 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-33fc-account-create-update-fq2hv"] Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.097393 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.098980 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82k9\" (UniqueName: \"kubernetes.io/projected/b2d80001-4027-4204-87ea-74a277bbaefe-kube-api-access-r82k9\") pod \"barbican-99aa-account-create-update-pqqnd\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.099127 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056e0820-c20f-4aae-9019-5215b548730d-operator-scripts\") pod \"neutron-db-create-fpswn\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.099178 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d80001-4027-4204-87ea-74a277bbaefe-operator-scripts\") pod \"barbican-99aa-account-create-update-pqqnd\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.099280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xk2z\" (UniqueName: \"kubernetes.io/projected/056e0820-c20f-4aae-9019-5215b548730d-kube-api-access-9xk2z\") pod \"neutron-db-create-fpswn\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.100636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056e0820-c20f-4aae-9019-5215b548730d-operator-scripts\") pod \"neutron-db-create-fpswn\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " pod="openstack/neutron-db-create-fpswn" 
Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.106190 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.137774 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-33fc-account-create-update-fq2hv"] Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.140877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xk2z\" (UniqueName: \"kubernetes.io/projected/056e0820-c20f-4aae-9019-5215b548730d-kube-api-access-9xk2z\") pod \"neutron-db-create-fpswn\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.143921 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.193394 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.200599 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d80001-4027-4204-87ea-74a277bbaefe-operator-scripts\") pod \"barbican-99aa-account-create-update-pqqnd\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.200724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82k9\" (UniqueName: \"kubernetes.io/projected/b2d80001-4027-4204-87ea-74a277bbaefe-kube-api-access-r82k9\") pod \"barbican-99aa-account-create-update-pqqnd\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.202227 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d80001-4027-4204-87ea-74a277bbaefe-operator-scripts\") pod \"barbican-99aa-account-create-update-pqqnd\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.233262 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.236668 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82k9\" (UniqueName: \"kubernetes.io/projected/b2d80001-4027-4204-87ea-74a277bbaefe-kube-api-access-r82k9\") pod \"barbican-99aa-account-create-update-pqqnd\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.252239 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.273912 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.299191 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.303821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwlds\" (UniqueName: \"kubernetes.io/projected/9e99f51e-e7c3-41f5-b23e-4b044485bccf-kube-api-access-cwlds\") pod \"neutron-33fc-account-create-update-fq2hv\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.310926 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e99f51e-e7c3-41f5-b23e-4b044485bccf-operator-scripts\") pod \"neutron-33fc-account-create-update-fq2hv\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.401889 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1d3c2a08-d60e-4b86-858d-f5ac038f566e","Type":"ContainerStarted","Data":"9e830b766c7f86c5f270e72750191962ef9b63014af0c48617f149a0335661a9"} Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.414859 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwlds\" (UniqueName: \"kubernetes.io/projected/9e99f51e-e7c3-41f5-b23e-4b044485bccf-kube-api-access-cwlds\") pod \"neutron-33fc-account-create-update-fq2hv\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.414943 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e99f51e-e7c3-41f5-b23e-4b044485bccf-operator-scripts\") pod \"neutron-33fc-account-create-update-fq2hv\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.415681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e99f51e-e7c3-41f5-b23e-4b044485bccf-operator-scripts\") pod \"neutron-33fc-account-create-update-fq2hv\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.434451 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwlds\" (UniqueName: \"kubernetes.io/projected/9e99f51e-e7c3-41f5-b23e-4b044485bccf-kube-api-access-cwlds\") pod \"neutron-33fc-account-create-update-fq2hv\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.634164 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.652764 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-p7mnw"] Mar 12 16:24:30 crc kubenswrapper[4687]: W0312 16:24:30.676879 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbb64ffe_b4eb_4547_9abc_0a332ebfb74d.slice/crio-98fdc3905eeb237f361aa9f1e7d11a8db87beae7142952a944d160cd6611d7f5 WatchSource:0}: Error finding container 98fdc3905eeb237f361aa9f1e7d11a8db87beae7142952a944d160cd6611d7f5: Status 404 returned error can't find the container with id 98fdc3905eeb237f361aa9f1e7d11a8db87beae7142952a944d160cd6611d7f5 Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.815975 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-62ec-account-create-update-9jt56"] Mar 12 16:24:30 crc kubenswrapper[4687]: I0312 16:24:30.846227 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m94x4"] Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.108210 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-82n95"] Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.412168 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m94x4" event={"ID":"bc214419-840c-47c7-ae24-0fa13e511604","Type":"ContainerStarted","Data":"ad44f0c17e33ffd3791ef0cb799856fec0c78bb723e35b41ab738702748a305c"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.412490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m94x4" event={"ID":"bc214419-840c-47c7-ae24-0fa13e511604","Type":"ContainerStarted","Data":"630627fac2f4dfd17de817876c5c97e3b3fe8687c6c2337576a1dfa96d82a3ef"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.418208 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p7mnw" event={"ID":"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d","Type":"ContainerStarted","Data":"6f286c784a6abbac6e3f98dced0e5fd35d18f8d04853341d516aa54c0c3ec433"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.418247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p7mnw" event={"ID":"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d","Type":"ContainerStarted","Data":"98fdc3905eeb237f361aa9f1e7d11a8db87beae7142952a944d160cd6611d7f5"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.424698 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-82n95" event={"ID":"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b","Type":"ContainerStarted","Data":"3b573455969f671a8b4d667689ce05addb784af9ced38e61d47185d2f9d3deed"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.428029 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1d3c2a08-d60e-4b86-858d-f5ac038f566e","Type":"ContainerStarted","Data":"4dcd11225d9df7f4d8b23cf5a66528e63251b1524aa077ae891699b8c92979ee"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.437978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-62ec-account-create-update-9jt56" event={"ID":"fa45e688-7b23-42a0-af5d-36c3a074344f","Type":"ContainerStarted","Data":"f6f2e753d78795ac77de8689d087d17dc6cebbb633cabeb277e66e7a2a015846"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.438024 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-62ec-account-create-update-9jt56" event={"ID":"fa45e688-7b23-42a0-af5d-36c3a074344f","Type":"ContainerStarted","Data":"94970c94166f38cbd283da4118ee74b3d7728d3004c7597ff524e8f8c71152bf"} Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.447788 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-m94x4" podStartSLOduration=2.447768172 podStartE2EDuration="2.447768172s" podCreationTimestamp="2026-03-12 16:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:31.429420021 +0000 UTC m=+1320.393382365" watchObservedRunningTime="2026-03-12 16:24:31.447768172 +0000 UTC m=+1320.411730526" Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.471701 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.471683014 podStartE2EDuration="18.471683014s" podCreationTimestamp="2026-03-12 16:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:31.467831889 +0000 UTC m=+1320.431794233" watchObservedRunningTime="2026-03-12 16:24:31.471683014 +0000 UTC m=+1320.435645358" Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.485051 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-p7mnw" podStartSLOduration=2.4850318590000002 podStartE2EDuration="2.485031859s" podCreationTimestamp="2026-03-12 16:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:31.481987546 +0000 UTC m=+1320.445949890" watchObservedRunningTime="2026-03-12 16:24:31.485031859 +0000 UTC m=+1320.448994213" Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.507535 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-62ec-account-create-update-9jt56" podStartSLOduration=2.507514332 podStartE2EDuration="2.507514332s" podCreationTimestamp="2026-03-12 16:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:31.498034063 +0000 UTC m=+1320.461996407" watchObservedRunningTime="2026-03-12 16:24:31.507514332 +0000 UTC m=+1320.471476676" Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.561291 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fpswn"] Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.574403 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-99aa-account-create-update-pqqnd"] Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.582770 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-33fc-account-create-update-fq2hv"] Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.591103 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6dk8w"] Mar 12 16:24:31 crc kubenswrapper[4687]: I0312 16:24:31.613610 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-aca1-account-create-update-nxln9"] Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.450779 4687 generic.go:334] "Generic (PLEG): container finished" podID="2ec53b11-d489-4b3d-9ab3-f71837d60140" 
containerID="626d5c30dbc033cf337b7a7add3de1774d786b6dbb36ee9e870eaf201c7b4a96" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.456420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-aca1-account-create-update-nxln9" event={"ID":"2ec53b11-d489-4b3d-9ab3-f71837d60140","Type":"ContainerDied","Data":"626d5c30dbc033cf337b7a7add3de1774d786b6dbb36ee9e870eaf201c7b4a96"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.456470 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-aca1-account-create-update-nxln9" event={"ID":"2ec53b11-d489-4b3d-9ab3-f71837d60140","Type":"ContainerStarted","Data":"ba73f16b6d8812721e236fe2cca1f55230cf95a1deaa51d1dc5c6da88ae44636"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.460207 4687 generic.go:334] "Generic (PLEG): container finished" podID="fa45e688-7b23-42a0-af5d-36c3a074344f" containerID="f6f2e753d78795ac77de8689d087d17dc6cebbb633cabeb277e66e7a2a015846" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.460350 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-62ec-account-create-update-9jt56" event={"ID":"fa45e688-7b23-42a0-af5d-36c3a074344f","Type":"ContainerDied","Data":"f6f2e753d78795ac77de8689d087d17dc6cebbb633cabeb277e66e7a2a015846"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.475838 4687 generic.go:334] "Generic (PLEG): container finished" podID="b2d80001-4027-4204-87ea-74a277bbaefe" containerID="f1179a4d14d922ef89c4055b62016f2f0a74d9c67ef31b590a51caf5002c3005" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.475942 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-99aa-account-create-update-pqqnd" event={"ID":"b2d80001-4027-4204-87ea-74a277bbaefe","Type":"ContainerDied","Data":"f1179a4d14d922ef89c4055b62016f2f0a74d9c67ef31b590a51caf5002c3005"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.477072 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-99aa-account-create-update-pqqnd" event={"ID":"b2d80001-4027-4204-87ea-74a277bbaefe","Type":"ContainerStarted","Data":"2553a83d66478c48770e2813e3f7d80ae49087293a168468b2516b47107b78df"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.483600 4687 generic.go:334] "Generic (PLEG): container finished" podID="bc214419-840c-47c7-ae24-0fa13e511604" containerID="ad44f0c17e33ffd3791ef0cb799856fec0c78bb723e35b41ab738702748a305c" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.483667 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m94x4" event={"ID":"bc214419-840c-47c7-ae24-0fa13e511604","Type":"ContainerDied","Data":"ad44f0c17e33ffd3791ef0cb799856fec0c78bb723e35b41ab738702748a305c"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.488861 4687 generic.go:334] "Generic (PLEG): container finished" podID="9e99f51e-e7c3-41f5-b23e-4b044485bccf" containerID="da01c485fd9632cd3eb4b12fffee261dd77fb23a9cb2c986d96c52f771d323c9" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.488933 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-33fc-account-create-update-fq2hv" event={"ID":"9e99f51e-e7c3-41f5-b23e-4b044485bccf","Type":"ContainerDied","Data":"da01c485fd9632cd3eb4b12fffee261dd77fb23a9cb2c986d96c52f771d323c9"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.488961 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-33fc-account-create-update-fq2hv" 
event={"ID":"9e99f51e-e7c3-41f5-b23e-4b044485bccf","Type":"ContainerStarted","Data":"820f7e77fa43146168f4ffaed93e83fd97001966ca07270255517de727dd3f76"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.491922 4687 generic.go:334] "Generic (PLEG): container finished" podID="cbb64ffe-b4eb-4547-9abc-0a332ebfb74d" containerID="6f286c784a6abbac6e3f98dced0e5fd35d18f8d04853341d516aa54c0c3ec433" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.491966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p7mnw" event={"ID":"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d","Type":"ContainerDied","Data":"6f286c784a6abbac6e3f98dced0e5fd35d18f8d04853341d516aa54c0c3ec433"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.494990 4687 generic.go:334] "Generic (PLEG): container finished" podID="056e0820-c20f-4aae-9019-5215b548730d" containerID="2f8ae45f6982ec91c6cc8039791ca6005afc026b46b72d50bc0ad0474f10ece6" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.495034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fpswn" event={"ID":"056e0820-c20f-4aae-9019-5215b548730d","Type":"ContainerDied","Data":"2f8ae45f6982ec91c6cc8039791ca6005afc026b46b72d50bc0ad0474f10ece6"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.495052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fpswn" event={"ID":"056e0820-c20f-4aae-9019-5215b548730d","Type":"ContainerStarted","Data":"9ce1623ae3dc5652202e546215a530de80dbad55dc41f3b4227c1c92af04c34e"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.497992 4687 generic.go:334] "Generic (PLEG): container finished" podID="3aab1c2f-2474-4289-a7f9-f95918c43526" containerID="1a32a2745633e102a2fc533b305c76e40c50a6a5701fa9851044ecaff05ad167" exitCode=0 Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.498435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6dk8w" event={"ID":"3aab1c2f-2474-4289-a7f9-f95918c43526","Type":"ContainerDied","Data":"1a32a2745633e102a2fc533b305c76e40c50a6a5701fa9851044ecaff05ad167"} Mar 12 16:24:32 crc kubenswrapper[4687]: I0312 16:24:32.498459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6dk8w" event={"ID":"3aab1c2f-2474-4289-a7f9-f95918c43526","Type":"ContainerStarted","Data":"2cb76aebdf2922a3324911720653f7bd47649a730c68cf88ba543f9f3ea0f98c"} Mar 12 16:24:33 crc kubenswrapper[4687]: I0312 16:24:33.840022 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:35 crc kubenswrapper[4687]: I0312 16:24:35.079819 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:24:35 crc kubenswrapper[4687]: I0312 16:24:35.152009 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4gdg"] Mar 12 16:24:35 crc kubenswrapper[4687]: I0312 16:24:35.152224 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-x4gdg" podUID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" containerName="dnsmasq-dns" containerID="cri-o://44ce4690d8d5fd4a7afe339a5d7560e0231fff0e7f3583dac6063657f6f47fcb" gracePeriod=10 Mar 12 16:24:35 crc kubenswrapper[4687]: I0312 16:24:35.538983 4687 generic.go:334] "Generic (PLEG): container finished" podID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" 
containerID="44ce4690d8d5fd4a7afe339a5d7560e0231fff0e7f3583dac6063657f6f47fcb" exitCode=0 Mar 12 16:24:35 crc kubenswrapper[4687]: I0312 16:24:35.539216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4gdg" event={"ID":"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff","Type":"ContainerDied","Data":"44ce4690d8d5fd4a7afe339a5d7560e0231fff0e7f3583dac6063657f6f47fcb"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.023001 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.057223 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.086242 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.116664 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.124640 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.132169 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.148321 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.156271 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214419-840c-47c7-ae24-0fa13e511604-operator-scripts\") pod \"bc214419-840c-47c7-ae24-0fa13e511604\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.156410 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xk2z\" (UniqueName: \"kubernetes.io/projected/056e0820-c20f-4aae-9019-5215b548730d-kube-api-access-9xk2z\") pod \"056e0820-c20f-4aae-9019-5215b548730d\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.156480 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5n85\" (UniqueName: \"kubernetes.io/projected/bc214419-840c-47c7-ae24-0fa13e511604-kube-api-access-m5n85\") pod \"bc214419-840c-47c7-ae24-0fa13e511604\" (UID: \"bc214419-840c-47c7-ae24-0fa13e511604\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.156557 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056e0820-c20f-4aae-9019-5215b548730d-operator-scripts\") pod \"056e0820-c20f-4aae-9019-5215b548730d\" (UID: \"056e0820-c20f-4aae-9019-5215b548730d\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.157907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056e0820-c20f-4aae-9019-5215b548730d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "056e0820-c20f-4aae-9019-5215b548730d" (UID: 
"056e0820-c20f-4aae-9019-5215b548730d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.160221 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc214419-840c-47c7-ae24-0fa13e511604-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc214419-840c-47c7-ae24-0fa13e511604" (UID: "bc214419-840c-47c7-ae24-0fa13e511604"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.174579 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc214419-840c-47c7-ae24-0fa13e511604-kube-api-access-m5n85" (OuterVolumeSpecName: "kube-api-access-m5n85") pod "bc214419-840c-47c7-ae24-0fa13e511604" (UID: "bc214419-840c-47c7-ae24-0fa13e511604"). InnerVolumeSpecName "kube-api-access-m5n85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.181518 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056e0820-c20f-4aae-9019-5215b548730d-kube-api-access-9xk2z" (OuterVolumeSpecName: "kube-api-access-9xk2z") pod "056e0820-c20f-4aae-9019-5215b548730d" (UID: "056e0820-c20f-4aae-9019-5215b548730d"). InnerVolumeSpecName "kube-api-access-9xk2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.245733 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.255169 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.260844 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e99f51e-e7c3-41f5-b23e-4b044485bccf-operator-scripts\") pod \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.260921 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwlds\" (UniqueName: \"kubernetes.io/projected/9e99f51e-e7c3-41f5-b23e-4b044485bccf-kube-api-access-cwlds\") pod \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\" (UID: \"9e99f51e-e7c3-41f5-b23e-4b044485bccf\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.260939 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jvqg\" (UniqueName: \"kubernetes.io/projected/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-kube-api-access-2jvqg\") pod \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.261004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa45e688-7b23-42a0-af5d-36c3a074344f-operator-scripts\") pod \"fa45e688-7b23-42a0-af5d-36c3a074344f\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.261073 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-operator-scripts\") pod \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\" (UID: \"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.261106 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r82k9\" (UniqueName: \"kubernetes.io/projected/b2d80001-4027-4204-87ea-74a277bbaefe-kube-api-access-r82k9\") pod \"b2d80001-4027-4204-87ea-74a277bbaefe\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.261133 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec53b11-d489-4b3d-9ab3-f71837d60140-operator-scripts\") pod \"2ec53b11-d489-4b3d-9ab3-f71837d60140\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.261152 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdp7z\" (UniqueName: \"kubernetes.io/projected/fa45e688-7b23-42a0-af5d-36c3a074344f-kube-api-access-zdp7z\") pod \"fa45e688-7b23-42a0-af5d-36c3a074344f\" (UID: \"fa45e688-7b23-42a0-af5d-36c3a074344f\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.261213 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d80001-4027-4204-87ea-74a277bbaefe-operator-scripts\") pod \"b2d80001-4027-4204-87ea-74a277bbaefe\" (UID: \"b2d80001-4027-4204-87ea-74a277bbaefe\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.261244 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhlj5\" (UniqueName: 
\"kubernetes.io/projected/2ec53b11-d489-4b3d-9ab3-f71837d60140-kube-api-access-hhlj5\") pod \"2ec53b11-d489-4b3d-9ab3-f71837d60140\" (UID: \"2ec53b11-d489-4b3d-9ab3-f71837d60140\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.262501 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa45e688-7b23-42a0-af5d-36c3a074344f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa45e688-7b23-42a0-af5d-36c3a074344f" (UID: "fa45e688-7b23-42a0-af5d-36c3a074344f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.262946 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbb64ffe-b4eb-4547-9abc-0a332ebfb74d" (UID: "cbb64ffe-b4eb-4547-9abc-0a332ebfb74d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.262994 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec53b11-d489-4b3d-9ab3-f71837d60140-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ec53b11-d489-4b3d-9ab3-f71837d60140" (UID: "2ec53b11-d489-4b3d-9ab3-f71837d60140"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.263189 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e99f51e-e7c3-41f5-b23e-4b044485bccf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e99f51e-e7c3-41f5-b23e-4b044485bccf" (UID: "9e99f51e-e7c3-41f5-b23e-4b044485bccf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.263190 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2d80001-4027-4204-87ea-74a277bbaefe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2d80001-4027-4204-87ea-74a277bbaefe" (UID: "b2d80001-4027-4204-87ea-74a277bbaefe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264120 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056e0820-c20f-4aae-9019-5215b548730d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264151 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d80001-4027-4204-87ea-74a277bbaefe-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264160 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e99f51e-e7c3-41f5-b23e-4b044485bccf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264170 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc214419-840c-47c7-ae24-0fa13e511604-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264179 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa45e688-7b23-42a0-af5d-36c3a074344f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264188 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xk2z\" (UniqueName: \"kubernetes.io/projected/056e0820-c20f-4aae-9019-5215b548730d-kube-api-access-9xk2z\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264213 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264222 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5n85\" (UniqueName: \"kubernetes.io/projected/bc214419-840c-47c7-ae24-0fa13e511604-kube-api-access-m5n85\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264231 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ec53b11-d489-4b3d-9ab3-f71837d60140-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.264934 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa45e688-7b23-42a0-af5d-36c3a074344f-kube-api-access-zdp7z" (OuterVolumeSpecName: "kube-api-access-zdp7z") pod "fa45e688-7b23-42a0-af5d-36c3a074344f" (UID: "fa45e688-7b23-42a0-af5d-36c3a074344f"). InnerVolumeSpecName "kube-api-access-zdp7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.267090 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec53b11-d489-4b3d-9ab3-f71837d60140-kube-api-access-hhlj5" (OuterVolumeSpecName: "kube-api-access-hhlj5") pod "2ec53b11-d489-4b3d-9ab3-f71837d60140" (UID: "2ec53b11-d489-4b3d-9ab3-f71837d60140"). InnerVolumeSpecName "kube-api-access-hhlj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.267782 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-kube-api-access-2jvqg" (OuterVolumeSpecName: "kube-api-access-2jvqg") pod "cbb64ffe-b4eb-4547-9abc-0a332ebfb74d" (UID: "cbb64ffe-b4eb-4547-9abc-0a332ebfb74d"). InnerVolumeSpecName "kube-api-access-2jvqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.270250 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d80001-4027-4204-87ea-74a277bbaefe-kube-api-access-r82k9" (OuterVolumeSpecName: "kube-api-access-r82k9") pod "b2d80001-4027-4204-87ea-74a277bbaefe" (UID: "b2d80001-4027-4204-87ea-74a277bbaefe"). InnerVolumeSpecName "kube-api-access-r82k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.271684 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e99f51e-e7c3-41f5-b23e-4b044485bccf-kube-api-access-cwlds" (OuterVolumeSpecName: "kube-api-access-cwlds") pod "9e99f51e-e7c3-41f5-b23e-4b044485bccf" (UID: "9e99f51e-e7c3-41f5-b23e-4b044485bccf"). InnerVolumeSpecName "kube-api-access-cwlds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.365258 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-sb\") pod \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.365350 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjs67\" (UniqueName: \"kubernetes.io/projected/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-kube-api-access-gjs67\") pod \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.365418 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-dns-svc\") pod \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.365618 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-config\") pod \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\" (UID: \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.365706 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4kmk\" (UniqueName: \"kubernetes.io/projected/3aab1c2f-2474-4289-a7f9-f95918c43526-kube-api-access-w4kmk\") pod \"3aab1c2f-2474-4289-a7f9-f95918c43526\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.365749 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-nb\") pod \"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\" (UID: 
\"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.365778 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aab1c2f-2474-4289-a7f9-f95918c43526-operator-scripts\") pod \"3aab1c2f-2474-4289-a7f9-f95918c43526\" (UID: \"3aab1c2f-2474-4289-a7f9-f95918c43526\") " Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.366318 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhlj5\" (UniqueName: \"kubernetes.io/projected/2ec53b11-d489-4b3d-9ab3-f71837d60140-kube-api-access-hhlj5\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.366336 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwlds\" (UniqueName: \"kubernetes.io/projected/9e99f51e-e7c3-41f5-b23e-4b044485bccf-kube-api-access-cwlds\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.366346 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jvqg\" (UniqueName: \"kubernetes.io/projected/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d-kube-api-access-2jvqg\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.366355 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r82k9\" (UniqueName: \"kubernetes.io/projected/b2d80001-4027-4204-87ea-74a277bbaefe-kube-api-access-r82k9\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.366382 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdp7z\" (UniqueName: \"kubernetes.io/projected/fa45e688-7b23-42a0-af5d-36c3a074344f-kube-api-access-zdp7z\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.366881 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aab1c2f-2474-4289-a7f9-f95918c43526-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3aab1c2f-2474-4289-a7f9-f95918c43526" (UID: "3aab1c2f-2474-4289-a7f9-f95918c43526"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.370216 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-kube-api-access-gjs67" (OuterVolumeSpecName: "kube-api-access-gjs67") pod "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" (UID: "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff"). InnerVolumeSpecName "kube-api-access-gjs67". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.371563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aab1c2f-2474-4289-a7f9-f95918c43526-kube-api-access-w4kmk" (OuterVolumeSpecName: "kube-api-access-w4kmk") pod "3aab1c2f-2474-4289-a7f9-f95918c43526" (UID: "3aab1c2f-2474-4289-a7f9-f95918c43526"). InnerVolumeSpecName "kube-api-access-w4kmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.412289 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" (UID: "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.436066 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-config" (OuterVolumeSpecName: "config") pod "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" (UID: "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.438552 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" (UID: "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.443312 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" (UID: "25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.468454 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.468479 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aab1c2f-2474-4289-a7f9-f95918c43526-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.468488 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.468496 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjs67\" (UniqueName: \"kubernetes.io/projected/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-kube-api-access-gjs67\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.468507 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.468516 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.468524 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4kmk\" (UniqueName: \"kubernetes.io/projected/3aab1c2f-2474-4289-a7f9-f95918c43526-kube-api-access-w4kmk\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.568914 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-82n95" event={"ID":"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b","Type":"ContainerStarted","Data":"19f85534d613ff5e57b0fed25c4e2a8f3a3e544cad06c4bf656a2a67ed95cdb2"} Mar 12 
16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.571082 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6dk8w" event={"ID":"3aab1c2f-2474-4289-a7f9-f95918c43526","Type":"ContainerDied","Data":"2cb76aebdf2922a3324911720653f7bd47649a730c68cf88ba543f9f3ea0f98c"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.571159 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb76aebdf2922a3324911720653f7bd47649a730c68cf88ba543f9f3ea0f98c" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.571279 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6dk8w" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.573784 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m94x4" event={"ID":"bc214419-840c-47c7-ae24-0fa13e511604","Type":"ContainerDied","Data":"630627fac2f4dfd17de817876c5c97e3b3fe8687c6c2337576a1dfa96d82a3ef"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.573812 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630627fac2f4dfd17de817876c5c97e3b3fe8687c6c2337576a1dfa96d82a3ef" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.573866 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m94x4" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.583727 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-33fc-account-create-update-fq2hv" event={"ID":"9e99f51e-e7c3-41f5-b23e-4b044485bccf","Type":"ContainerDied","Data":"820f7e77fa43146168f4ffaed93e83fd97001966ca07270255517de727dd3f76"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.583945 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820f7e77fa43146168f4ffaed93e83fd97001966ca07270255517de727dd3f76" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.583763 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-33fc-account-create-update-fq2hv" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.584922 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fpswn" event={"ID":"056e0820-c20f-4aae-9019-5215b548730d","Type":"ContainerDied","Data":"9ce1623ae3dc5652202e546215a530de80dbad55dc41f3b4227c1c92af04c34e"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.584958 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ce1623ae3dc5652202e546215a530de80dbad55dc41f3b4227c1c92af04c34e" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.584993 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fpswn" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.586378 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-aca1-account-create-update-nxln9" event={"ID":"2ec53b11-d489-4b3d-9ab3-f71837d60140","Type":"ContainerDied","Data":"ba73f16b6d8812721e236fe2cca1f55230cf95a1deaa51d1dc5c6da88ae44636"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.586451 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba73f16b6d8812721e236fe2cca1f55230cf95a1deaa51d1dc5c6da88ae44636" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.586505 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-aca1-account-create-update-nxln9" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.600201 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-82n95" podStartSLOduration=2.8888073050000003 podStartE2EDuration="8.60018098s" podCreationTimestamp="2026-03-12 16:24:29 +0000 UTC" firstStartedPulling="2026-03-12 16:24:31.140831687 +0000 UTC m=+1320.104794031" lastFinishedPulling="2026-03-12 16:24:36.852205362 +0000 UTC m=+1325.816167706" observedRunningTime="2026-03-12 16:24:37.596808238 +0000 UTC m=+1326.560770582" watchObservedRunningTime="2026-03-12 16:24:37.60018098 +0000 UTC m=+1326.564143324" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.600552 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-62ec-account-create-update-9jt56" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.600559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-62ec-account-create-update-9jt56" event={"ID":"fa45e688-7b23-42a0-af5d-36c3a074344f","Type":"ContainerDied","Data":"94970c94166f38cbd283da4118ee74b3d7728d3004c7597ff524e8f8c71152bf"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.600611 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94970c94166f38cbd283da4118ee74b3d7728d3004c7597ff524e8f8c71152bf" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.605632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-99aa-account-create-update-pqqnd" event={"ID":"b2d80001-4027-4204-87ea-74a277bbaefe","Type":"ContainerDied","Data":"2553a83d66478c48770e2813e3f7d80ae49087293a168468b2516b47107b78df"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.605662 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2553a83d66478c48770e2813e3f7d80ae49087293a168468b2516b47107b78df" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.605678 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-99aa-account-create-update-pqqnd" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.606863 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-p7mnw" event={"ID":"cbb64ffe-b4eb-4547-9abc-0a332ebfb74d","Type":"ContainerDied","Data":"98fdc3905eeb237f361aa9f1e7d11a8db87beae7142952a944d160cd6611d7f5"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.606899 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98fdc3905eeb237f361aa9f1e7d11a8db87beae7142952a944d160cd6611d7f5" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.606952 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-p7mnw" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.609491 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-x4gdg" event={"ID":"25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff","Type":"ContainerDied","Data":"77f28074beb1d8f1836e9ec6616b4de08eb8a61025239bd32dfbbcd32f3248ea"} Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.609564 4687 scope.go:117] "RemoveContainer" containerID="44ce4690d8d5fd4a7afe339a5d7560e0231fff0e7f3583dac6063657f6f47fcb" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.609620 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-x4gdg" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.659817 4687 scope.go:117] "RemoveContainer" containerID="bd090d29e2c1fe9ea63ba12382a0aba8139e55f793ca5ae995c80ad5ce241cec" Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.670831 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4gdg"] Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.683686 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-x4gdg"] Mar 12 16:24:37 crc kubenswrapper[4687]: I0312 16:24:37.745840 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" path="/var/lib/kubelet/pods/25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff/volumes" Mar 12 16:24:40 crc kubenswrapper[4687]: I0312 16:24:40.642073 4687 generic.go:334] "Generic (PLEG): container finished" podID="afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" containerID="19f85534d613ff5e57b0fed25c4e2a8f3a3e544cad06c4bf656a2a67ed95cdb2" exitCode=0 Mar 12 16:24:40 crc kubenswrapper[4687]: I0312 16:24:40.642166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-82n95" event={"ID":"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b","Type":"ContainerDied","Data":"19f85534d613ff5e57b0fed25c4e2a8f3a3e544cad06c4bf656a2a67ed95cdb2"} Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.137873 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.270060 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-config-data\") pod \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.270188 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-combined-ca-bundle\") pod \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.270351 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m2np\" (UniqueName: \"kubernetes.io/projected/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-kube-api-access-9m2np\") pod \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\" (UID: \"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b\") " Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.283581 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-kube-api-access-9m2np" (OuterVolumeSpecName: "kube-api-access-9m2np") pod "afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" (UID: "afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b"). InnerVolumeSpecName "kube-api-access-9m2np". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.295922 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" (UID: "afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.330992 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-config-data" (OuterVolumeSpecName: "config-data") pod "afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" (UID: "afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.372557 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.372605 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m2np\" (UniqueName: \"kubernetes.io/projected/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-kube-api-access-9m2np\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.372622 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.662960 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-82n95" event={"ID":"afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b","Type":"ContainerDied","Data":"3b573455969f671a8b4d667689ce05addb784af9ced38e61d47185d2f9d3deed"} Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.663002 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b573455969f671a8b4d667689ce05addb784af9ced38e61d47185d2f9d3deed" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.663025 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-82n95" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.977643 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-49nzz"] Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978144 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056e0820-c20f-4aae-9019-5215b548730d" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978164 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="056e0820-c20f-4aae-9019-5215b548730d" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978182 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb64ffe-b4eb-4547-9abc-0a332ebfb74d" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978189 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb64ffe-b4eb-4547-9abc-0a332ebfb74d" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978214 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" containerName="keystone-db-sync" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978221 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" containerName="keystone-db-sync" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978230 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d80001-4027-4204-87ea-74a277bbaefe" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978238 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d80001-4027-4204-87ea-74a277bbaefe" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978260 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa45e688-7b23-42a0-af5d-36c3a074344f" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978268 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa45e688-7b23-42a0-af5d-36c3a074344f" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978276 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc214419-840c-47c7-ae24-0fa13e511604" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978284 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc214419-840c-47c7-ae24-0fa13e511604" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978293 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e99f51e-e7c3-41f5-b23e-4b044485bccf" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978300 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e99f51e-e7c3-41f5-b23e-4b044485bccf" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978315 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec53b11-d489-4b3d-9ab3-f71837d60140" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978323 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec53b11-d489-4b3d-9ab3-f71837d60140" 
containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978331 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" containerName="init" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978338 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" containerName="init" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978352 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aab1c2f-2474-4289-a7f9-f95918c43526" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978376 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aab1c2f-2474-4289-a7f9-f95918c43526" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: E0312 16:24:42.978402 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" containerName="dnsmasq-dns" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978409 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" containerName="dnsmasq-dns" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978622 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc214419-840c-47c7-ae24-0fa13e511604" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978638 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa45e688-7b23-42a0-af5d-36c3a074344f" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978649 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec53b11-d489-4b3d-9ab3-f71837d60140" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978664 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb64ffe-b4eb-4547-9abc-0a332ebfb74d" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978675 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" containerName="keystone-db-sync" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978690 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="056e0820-c20f-4aae-9019-5215b548730d" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978700 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d80001-4027-4204-87ea-74a277bbaefe" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978714 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="25db9ca0-dfb9-4d2e-a89f-4acb5fbb34ff" containerName="dnsmasq-dns" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978724 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aab1c2f-2474-4289-a7f9-f95918c43526" containerName="mariadb-database-create" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.978739 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e99f51e-e7c3-41f5-b23e-4b044485bccf" containerName="mariadb-account-create-update" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.984331 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.998074 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g7nzf"] Mar 12 16:24:42 crc kubenswrapper[4687]: I0312 16:24:42.999786 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.003964 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.004656 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.004862 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.005375 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lg9js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.005524 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.033920 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-49nzz"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.047844 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g7nzf"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086021 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086084 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-fernet-keys\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086293 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086339 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-credential-keys\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " 
pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086413 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086546 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-config-data\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086625 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-config\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086721 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-combined-ca-bundle\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086781 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf477\" (UniqueName: \"kubernetes.io/projected/f4475047-ada2-4639-a0f1-2278c9518e5b-kube-api-access-tf477\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086800 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-scripts\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.086863 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45ps\" (UniqueName: \"kubernetes.io/projected/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-kube-api-access-f45ps\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.168414 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-zd9d7"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.171222 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.182090 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.182755 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9lspd" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.189856 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-fernet-keys\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.189898 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.189932 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-credential-keys\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.189975 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-config-data\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-config\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-combined-ca-bundle\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-scripts\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tf477\" (UniqueName: \"kubernetes.io/projected/f4475047-ada2-4639-a0f1-2278c9518e5b-kube-api-access-tf477\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190281 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f45ps\" (UniqueName: \"kubernetes.io/projected/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-kube-api-access-f45ps\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.190337 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.193003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.199007 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.198995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.202032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-config-data\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.202469 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zd9d7"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.203224 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-credential-keys\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.206259 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-combined-ca-bundle\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.210933 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-fernet-keys\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.212040 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-scripts\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.214866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-config\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.222531 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.223436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f45ps\" (UniqueName: \"kubernetes.io/projected/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-kube-api-access-f45ps\") pod \"keystone-bootstrap-g7nzf\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.236558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf477\" (UniqueName: \"kubernetes.io/projected/f4475047-ada2-4639-a0f1-2278c9518e5b-kube-api-access-tf477\") pod \"dnsmasq-dns-847c4cc679-49nzz\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.297230 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-combined-ca-bundle\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.297561 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdghq\" (UniqueName: \"kubernetes.io/projected/9846a398-b104-418e-ace9-6eb022ddacbb-kube-api-access-tdghq\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.297750 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-config-data\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.301157 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.323868 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9mcqx"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.325404 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.341077 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9mcqx"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.341233 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.344269 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-868pz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.344547 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.365504 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.399669 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-config-data\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.399990 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-combined-ca-bundle\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.400120 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdghq\" (UniqueName: \"kubernetes.io/projected/9846a398-b104-418e-ace9-6eb022ddacbb-kube-api-access-tdghq\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.406633 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-config-data\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.434460 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jqpm6"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.436068 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.437235 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdghq\" (UniqueName: \"kubernetes.io/projected/9846a398-b104-418e-ace9-6eb022ddacbb-kube-api-access-tdghq\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.443710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-combined-ca-bundle\") pod \"heat-db-sync-zd9d7\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.444083 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6zb78" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.444508 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.444724 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.461891 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bqj4x"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.463575 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.469246 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.469457 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hsmbz" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.489406 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bqj4x"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.498609 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zd9d7" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505477 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-combined-ca-bundle\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505560 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-config-data\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505627 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dznbs\" (UniqueName: \"kubernetes.io/projected/de14ef1d-d5a7-490a-a522-8d4cf39989b4-kube-api-access-dznbs\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505658 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-scripts\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de14ef1d-d5a7-490a-a522-8d4cf39989b4-etc-machine-id\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-combined-ca-bundle\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505849 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbgm\" (UniqueName: \"kubernetes.io/projected/eda73682-741c-41fd-80be-97995f73b1e1-kube-api-access-jhbgm\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505913 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-db-sync-config-data\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.505964 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-config\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") 
" pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.512191 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jqpm6"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.545794 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5ptv6"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.554588 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.575384 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.575601 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.575720 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jx2x2" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.618752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhbgm\" (UniqueName: \"kubernetes.io/projected/eda73682-741c-41fd-80be-97995f73b1e1-kube-api-access-jhbgm\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.618808 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5649bea8-d269-4e13-bcf9-8e58f4cdb132-logs\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.618843 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-combined-ca-bundle\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.618870 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-db-sync-config-data\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.618908 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-config\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.618940 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-config-data\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.618995 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-combined-ca-bundle\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619036 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-config-data\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619076 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb9zl\" (UniqueName: \"kubernetes.io/projected/5649bea8-d269-4e13-bcf9-8e58f4cdb132-kube-api-access-fb9zl\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619105 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-db-sync-config-data\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619131 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dznbs\" (UniqueName: \"kubernetes.io/projected/de14ef1d-d5a7-490a-a522-8d4cf39989b4-kube-api-access-dznbs\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619159 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-scripts\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619190 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-combined-ca-bundle\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619231 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-scripts\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de14ef1d-d5a7-490a-a522-8d4cf39989b4-etc-machine-id\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619310 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-combined-ca-bundle\") pod 
\"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.619340 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkwp\" (UniqueName: \"kubernetes.io/projected/8483f386-39e7-45eb-bd05-02dcdce7677f-kube-api-access-rvkwp\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.629292 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-db-sync-config-data\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.630507 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-config\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.630581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de14ef1d-d5a7-490a-a522-8d4cf39989b4-etc-machine-id\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.632519 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-scripts\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.652773 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-combined-ca-bundle\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.653073 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-config-data\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.653145 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5ptv6"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.653846 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-combined-ca-bundle\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.661988 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhbgm\" (UniqueName: \"kubernetes.io/projected/eda73682-741c-41fd-80be-97995f73b1e1-kube-api-access-jhbgm\") pod \"neutron-db-sync-jqpm6\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") 
" pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.683783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dznbs\" (UniqueName: \"kubernetes.io/projected/de14ef1d-d5a7-490a-a522-8d4cf39989b4-kube-api-access-dznbs\") pod \"cinder-db-sync-9mcqx\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb9zl\" (UniqueName: \"kubernetes.io/projected/5649bea8-d269-4e13-bcf9-8e58f4cdb132-kube-api-access-fb9zl\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722131 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-db-sync-config-data\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-combined-ca-bundle\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-scripts\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkwp\" (UniqueName: \"kubernetes.io/projected/8483f386-39e7-45eb-bd05-02dcdce7677f-kube-api-access-rvkwp\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722312 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5649bea8-d269-4e13-bcf9-8e58f4cdb132-logs\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722333 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-combined-ca-bundle\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.722737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-config-data\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.726924 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5649bea8-d269-4e13-bcf9-8e58f4cdb132-logs\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.740168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-scripts\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.740430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-config-data\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.740534 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-combined-ca-bundle\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.740837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-db-sync-config-data\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.741092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-combined-ca-bundle\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.743904 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb9zl\" (UniqueName: \"kubernetes.io/projected/5649bea8-d269-4e13-bcf9-8e58f4cdb132-kube-api-access-fb9zl\") pod \"placement-db-sync-5ptv6\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.749026 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkwp\" (UniqueName: \"kubernetes.io/projected/8483f386-39e7-45eb-bd05-02dcdce7677f-kube-api-access-rvkwp\") pod \"barbican-db-sync-bqj4x\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.785240 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-49nzz"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.785272 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lp6js"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.787529 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.792083 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lp6js"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.809838 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.813185 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.815244 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.816786 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.841397 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.850076 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.858688 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.869568 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.869581 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.921856 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-config\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931408 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-config-data\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931486 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-log-httpd\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k7d6\" (UniqueName: \"kubernetes.io/projected/ef7e01d7-21f4-49be-8700-994e45280f37-kube-api-access-7k7d6\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931532 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931576 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931645 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931709 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcvvh\" (UniqueName: \"kubernetes.io/projected/38be6432-e14f-401f-bb52-f0266cffb420-kube-api-access-bcvvh\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931742 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-scripts\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.931861 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-run-httpd\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.932664 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:43 crc kubenswrapper[4687]: I0312 16:24:43.940812 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5ptv6" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035502 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035747 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035806 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcvvh\" (UniqueName: \"kubernetes.io/projected/38be6432-e14f-401f-bb52-f0266cffb420-kube-api-access-bcvvh\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035829 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035856 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-scripts\") 
pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-run-httpd\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035950 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.035985 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-config\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.036012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-config-data\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.036042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-log-httpd\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.036056 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k7d6\" (UniqueName: \"kubernetes.io/projected/ef7e01d7-21f4-49be-8700-994e45280f37-kube-api-access-7k7d6\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.036077 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.036104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.036919 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.037076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.037462 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-config\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.037671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-run-httpd\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.038062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-log-httpd\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.038499 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.038511 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.050815 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.052027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.056112 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-scripts\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.059639 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-config-data\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.074240 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bcvvh\" (UniqueName: \"kubernetes.io/projected/38be6432-e14f-401f-bb52-f0266cffb420-kube-api-access-bcvvh\") pod \"dnsmasq-dns-785d8bcb8c-lp6js\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.087176 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k7d6\" (UniqueName: \"kubernetes.io/projected/ef7e01d7-21f4-49be-8700-994e45280f37-kube-api-access-7k7d6\") pod \"ceilometer-0\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.123769 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.123827 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.130297 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.133505 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.142242 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.144650 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.150005 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gln9t" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.150210 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.150483 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.155199 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.222188 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246146 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246219 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246252 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246297 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhm5x\" (UniqueName: \"kubernetes.io/projected/2f2a199d-f910-4280-8a19-b1b454ce4f05-kube-api-access-hhm5x\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246335 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246379 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-logs\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.246401 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.287654 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.290202 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.293130 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.321574 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354601 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354708 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354755 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354814 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhm5x\" (UniqueName: \"kubernetes.io/projected/2f2a199d-f910-4280-8a19-b1b454ce4f05-kube-api-access-hhm5x\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354838 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354861 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-logs\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.354926 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.360486 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.361919 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-logs\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.370109 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.370147 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b302cf2d10698cf3ec9c6d22e773973781febffa91dff6b5e0b4b9a361287f8c/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.377223 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.382186 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.392632 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhm5x\" (UniqueName: \"kubernetes.io/projected/2f2a199d-f910-4280-8a19-b1b454ce4f05-kube-api-access-hhm5x\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.394398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.399168 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g7nzf"] Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.399714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.410663 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-49nzz"] Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.419349 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.458675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.458978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.459062 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.459129 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnn29\" (UniqueName: \"kubernetes.io/projected/f4158411-09d5-4a5a-8032-20e25b26d36e-kube-api-access-tnn29\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.459314 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.459423 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-logs\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.459513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.459578 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.519691 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.564998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 
16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.565131 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.565156 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.566086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnn29\" (UniqueName: \"kubernetes.io/projected/f4158411-09d5-4a5a-8032-20e25b26d36e-kube-api-access-tnn29\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.566163 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.566187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-logs\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.566171 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.566205 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.566292 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.570847 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.571812 
4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-logs\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.578427 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.581371 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.586218 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.586279 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac1099220b348776e40356de6ace90133c51ecbf0c7d7d5992bf76d1ea170c4e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.586426 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.603592 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnn29\" (UniqueName: \"kubernetes.io/projected/f4158411-09d5-4a5a-8032-20e25b26d36e-kube-api-access-tnn29\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.703371 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zd9d7"] Mar 12 16:24:44 crc kubenswrapper[4687]: W0312 16:24:44.718144 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9846a398_b104_418e_ace9_6eb022ddacbb.slice/crio-a2ea8af0b437ccee7ebcbd58e37909626980ec5cf364241c9de4aa41010091c1 WatchSource:0}: Error finding container a2ea8af0b437ccee7ebcbd58e37909626980ec5cf364241c9de4aa41010091c1: Status 404 returned error can't find the container with id a2ea8af0b437ccee7ebcbd58e37909626980ec5cf364241c9de4aa41010091c1 Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.725718 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-49nzz" 
event={"ID":"f4475047-ada2-4639-a0f1-2278c9518e5b","Type":"ContainerStarted","Data":"5643cee1b92d014b85db3999f196194c0552977c9496e05558e1fff328ab7de5"} Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.728573 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g7nzf" event={"ID":"45691b3d-ae9c-4c7d-b053-76db72b4ca3d","Type":"ContainerStarted","Data":"0e5432f5506d89869be9a2826d4a4270e68d60fb4854557cfeb432c4b88d65d7"} Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.740613 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.756011 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.756783 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:24:44 crc kubenswrapper[4687]: I0312 16:24:44.856303 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jqpm6"] Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.058486 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.098122 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bqj4x"] Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.131905 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9mcqx"] Mar 12 16:24:45 crc kubenswrapper[4687]: W0312 16:24:45.150726 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8483f386_39e7_45eb_bd05_02dcdce7677f.slice/crio-13f9e9db5295b0c52bc8fb5421ad597996224693802281fc3774bdfeee202438 WatchSource:0}: Error finding container 13f9e9db5295b0c52bc8fb5421ad597996224693802281fc3774bdfeee202438: Status 404 returned error can't find the container with id 13f9e9db5295b0c52bc8fb5421ad597996224693802281fc3774bdfeee202438 Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.320247 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5ptv6"] Mar 12 16:24:45 crc kubenswrapper[4687]: W0312 16:24:45.333520 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef7e01d7_21f4_49be_8700_994e45280f37.slice/crio-9f14638c6a5a91f190ad6e4aa329d0d79c77155f1c74c7d5fb48289df3966bbd WatchSource:0}: Error finding container 9f14638c6a5a91f190ad6e4aa329d0d79c77155f1c74c7d5fb48289df3966bbd: Status 404 returned error can't find the container with id 9f14638c6a5a91f190ad6e4aa329d0d79c77155f1c74c7d5fb48289df3966bbd Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.341662 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lp6js"] Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.381922 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.576660 4687 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.669201 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.697383 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.774043 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5ptv6" event={"ID":"5649bea8-d269-4e13-bcf9-8e58f4cdb132","Type":"ContainerStarted","Data":"5362aa3e93684f15ac2bd4bb4dd1c04a189272de9a185d005270dca5936541b8"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.787348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jqpm6" event={"ID":"eda73682-741c-41fd-80be-97995f73b1e1","Type":"ContainerStarted","Data":"dbfdc18c61b8236292443d6e9b81a4436b38f6691c04f18cbfa7ee77fc877cc0"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.787399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jqpm6" event={"ID":"eda73682-741c-41fd-80be-97995f73b1e1","Type":"ContainerStarted","Data":"750eed0692f29f5cc61d0f4d6347d8d3acf7a5d5d366043da794fb829da5523e"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.789628 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bqj4x" event={"ID":"8483f386-39e7-45eb-bd05-02dcdce7677f","Type":"ContainerStarted","Data":"13f9e9db5295b0c52bc8fb5421ad597996224693802281fc3774bdfeee202438"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.801145 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" event={"ID":"38be6432-e14f-401f-bb52-f0266cffb420","Type":"ContainerStarted","Data":"be853e3043cbc74e06d9a1ac5ea62d58a17127bcda3369ad4a97f3960b87af9c"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.814859 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jqpm6" podStartSLOduration=2.814834496 podStartE2EDuration="2.814834496s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:45.807061115 +0000 UTC m=+1334.771023459" watchObservedRunningTime="2026-03-12 16:24:45.814834496 +0000 UTC m=+1334.778796850" Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.837457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerStarted","Data":"9f14638c6a5a91f190ad6e4aa329d0d79c77155f1c74c7d5fb48289df3966bbd"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.850287 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4475047-ada2-4639-a0f1-2278c9518e5b" containerID="08962de09c31cb293d0e9bcb0cefa5fd9114a75bf6bdb18fdbf454a8a8722e17" exitCode=0 Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.850864 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-49nzz" event={"ID":"f4475047-ada2-4639-a0f1-2278c9518e5b","Type":"ContainerDied","Data":"08962de09c31cb293d0e9bcb0cefa5fd9114a75bf6bdb18fdbf454a8a8722e17"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.874719 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2f2a199d-f910-4280-8a19-b1b454ce4f05","Type":"ContainerStarted","Data":"71129e9c019f0b7d15f00d55507315bd8a35e0f607e4390a278de2cf0655a714"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.877574 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zd9d7" event={"ID":"9846a398-b104-418e-ace9-6eb022ddacbb","Type":"ContainerStarted","Data":"a2ea8af0b437ccee7ebcbd58e37909626980ec5cf364241c9de4aa41010091c1"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.880323 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9mcqx" event={"ID":"de14ef1d-d5a7-490a-a522-8d4cf39989b4","Type":"ContainerStarted","Data":"03f76ac147ab1adec807c9c7f2752d3672c04ff238cfe4d0297a20b671fd68ca"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.886668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g7nzf" event={"ID":"45691b3d-ae9c-4c7d-b053-76db72b4ca3d","Type":"ContainerStarted","Data":"6340e8790af705ccc4145206bdca7b0f3920646ee90ca2fc1000668d160c4a09"} Mar 12 16:24:45 crc kubenswrapper[4687]: I0312 16:24:45.921647 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:24:45 crc kubenswrapper[4687]: W0312 16:24:45.925586 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4158411_09d5_4a5a_8032_20e25b26d36e.slice/crio-b4cab2b456e7c17f0647d6bb498b65bba0e76943a536e45980947faca32a0eb6 WatchSource:0}: Error finding container b4cab2b456e7c17f0647d6bb498b65bba0e76943a536e45980947faca32a0eb6: Status 404 returned error can't find the container with id b4cab2b456e7c17f0647d6bb498b65bba0e76943a536e45980947faca32a0eb6 Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.032807 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g7nzf" podStartSLOduration=4.032782823 podStartE2EDuration="4.032782823s" podCreationTimestamp="2026-03-12 16:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:45.924346484 +0000 UTC m=+1334.888308828" watchObservedRunningTime="2026-03-12 16:24:46.032782823 +0000 UTC m=+1334.996745167" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.060106 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.589214 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.647612 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-nb\") pod \"f4475047-ada2-4639-a0f1-2278c9518e5b\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.647892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf477\" (UniqueName: \"kubernetes.io/projected/f4475047-ada2-4639-a0f1-2278c9518e5b-kube-api-access-tf477\") pod \"f4475047-ada2-4639-a0f1-2278c9518e5b\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.648024 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-sb\") pod \"f4475047-ada2-4639-a0f1-2278c9518e5b\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.648699 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-svc\") pod \"f4475047-ada2-4639-a0f1-2278c9518e5b\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.648829 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-swift-storage-0\") pod \"f4475047-ada2-4639-a0f1-2278c9518e5b\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.648897 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-config\") pod \"f4475047-ada2-4639-a0f1-2278c9518e5b\" (UID: \"f4475047-ada2-4639-a0f1-2278c9518e5b\") " Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.667041 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4475047-ada2-4639-a0f1-2278c9518e5b-kube-api-access-tf477" (OuterVolumeSpecName: "kube-api-access-tf477") pod "f4475047-ada2-4639-a0f1-2278c9518e5b" (UID: "f4475047-ada2-4639-a0f1-2278c9518e5b"). InnerVolumeSpecName "kube-api-access-tf477". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.682965 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4475047-ada2-4639-a0f1-2278c9518e5b" (UID: "f4475047-ada2-4639-a0f1-2278c9518e5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.687040 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-config" (OuterVolumeSpecName: "config") pod "f4475047-ada2-4639-a0f1-2278c9518e5b" (UID: "f4475047-ada2-4639-a0f1-2278c9518e5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.695563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4475047-ada2-4639-a0f1-2278c9518e5b" (UID: "f4475047-ada2-4639-a0f1-2278c9518e5b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.708448 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4475047-ada2-4639-a0f1-2278c9518e5b" (UID: "f4475047-ada2-4639-a0f1-2278c9518e5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.711062 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4475047-ada2-4639-a0f1-2278c9518e5b" (UID: "f4475047-ada2-4639-a0f1-2278c9518e5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.751947 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.751981 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.751991 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf477\" (UniqueName: \"kubernetes.io/projected/f4475047-ada2-4639-a0f1-2278c9518e5b-kube-api-access-tf477\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.752000 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.752240 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.752249 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4475047-ada2-4639-a0f1-2278c9518e5b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.901242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f4158411-09d5-4a5a-8032-20e25b26d36e","Type":"ContainerStarted","Data":"b4cab2b456e7c17f0647d6bb498b65bba0e76943a536e45980947faca32a0eb6"} Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.903741 4687 generic.go:334] "Generic (PLEG): container finished" podID="38be6432-e14f-401f-bb52-f0266cffb420" containerID="33e01f46614fba73f0d7544164e28ae7f9204961a15a70386e58bfe7ec6babca" exitCode=0 Mar 
12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.903881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" event={"ID":"38be6432-e14f-401f-bb52-f0266cffb420","Type":"ContainerDied","Data":"33e01f46614fba73f0d7544164e28ae7f9204961a15a70386e58bfe7ec6babca"} Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.906350 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-49nzz" Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.906869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-49nzz" event={"ID":"f4475047-ada2-4639-a0f1-2278c9518e5b","Type":"ContainerDied","Data":"5643cee1b92d014b85db3999f196194c0552977c9496e05558e1fff328ab7de5"} Mar 12 16:24:46 crc kubenswrapper[4687]: I0312 16:24:46.906921 4687 scope.go:117] "RemoveContainer" containerID="08962de09c31cb293d0e9bcb0cefa5fd9114a75bf6bdb18fdbf454a8a8722e17" Mar 12 16:24:47 crc kubenswrapper[4687]: I0312 16:24:47.019721 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-49nzz"] Mar 12 16:24:47 crc kubenswrapper[4687]: I0312 16:24:47.034503 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-49nzz"] Mar 12 16:24:47 crc kubenswrapper[4687]: I0312 16:24:47.754814 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4475047-ada2-4639-a0f1-2278c9518e5b" path="/var/lib/kubelet/pods/f4475047-ada2-4639-a0f1-2278c9518e5b/volumes" Mar 12 16:24:47 crc kubenswrapper[4687]: I0312 16:24:47.949749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f4158411-09d5-4a5a-8032-20e25b26d36e","Type":"ContainerStarted","Data":"b363dbd02f7bb7609aa016c860061d551ac2373d948e59c2dded4d8918129244"} Mar 12 16:24:47 crc kubenswrapper[4687]: I0312 16:24:47.984867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" event={"ID":"38be6432-e14f-401f-bb52-f0266cffb420","Type":"ContainerStarted","Data":"875ab923c2128f241cde63c294be12dd7345300a7c2ecd6959292bbe51c0a207"} Mar 12 16:24:47 crc kubenswrapper[4687]: I0312 16:24:47.985462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:48 crc kubenswrapper[4687]: I0312 16:24:48.019256 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f2a199d-f910-4280-8a19-b1b454ce4f05","Type":"ContainerStarted","Data":"1580f9cf67a5bf2e83f7642389d6a8aae6b5c2a79a9dd44721f17c4fbfd86f83"} Mar 12 16:24:48 crc kubenswrapper[4687]: I0312 16:24:48.024568 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" podStartSLOduration=5.024549798 podStartE2EDuration="5.024549798s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:48.019389128 +0000 UTC m=+1336.983351482" watchObservedRunningTime="2026-03-12 16:24:48.024549798 +0000 UTC m=+1336.988512142" Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.032233 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2f2a199d-f910-4280-8a19-b1b454ce4f05","Type":"ContainerStarted","Data":"69b5ac23464b55a181aec9f8d1c17e784c126f3352c33402256bbee3307c782a"} Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.032400 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-httpd" containerID="cri-o://69b5ac23464b55a181aec9f8d1c17e784c126f3352c33402256bbee3307c782a" gracePeriod=30 Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.032347 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-log" containerID="cri-o://1580f9cf67a5bf2e83f7642389d6a8aae6b5c2a79a9dd44721f17c4fbfd86f83" gracePeriod=30 Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.036831 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-log" containerID="cri-o://b363dbd02f7bb7609aa016c860061d551ac2373d948e59c2dded4d8918129244" gracePeriod=30 Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.036930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f4158411-09d5-4a5a-8032-20e25b26d36e","Type":"ContainerStarted","Data":"8c77c79bc73fa982a193908bc638c66d386e053810154c26441d60e1d54719fd"} Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.036999 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-httpd" containerID="cri-o://8c77c79bc73fa982a193908bc638c66d386e053810154c26441d60e1d54719fd" gracePeriod=30 Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.056824 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.056809034 podStartE2EDuration="6.056809034s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:49.055453867 +0000 UTC m=+1338.019416211" watchObservedRunningTime="2026-03-12 16:24:49.056809034 +0000 UTC m=+1338.020771378" Mar 12 16:24:49 crc kubenswrapper[4687]: I0312 16:24:49.082582 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.082475974 podStartE2EDuration="6.082475974s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:24:49.081686512 +0000 UTC m=+1338.045648856" watchObservedRunningTime="2026-03-12 16:24:49.082475974 +0000 UTC m=+1338.046438318" Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.049480 4687 generic.go:334] "Generic (PLEG): container finished" podID="45691b3d-ae9c-4c7d-b053-76db72b4ca3d" containerID="6340e8790af705ccc4145206bdca7b0f3920646ee90ca2fc1000668d160c4a09" exitCode=0 Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.049559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g7nzf" 
event={"ID":"45691b3d-ae9c-4c7d-b053-76db72b4ca3d","Type":"ContainerDied","Data":"6340e8790af705ccc4145206bdca7b0f3920646ee90ca2fc1000668d160c4a09"} Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.053797 4687 generic.go:334] "Generic (PLEG): container finished" podID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerID="69b5ac23464b55a181aec9f8d1c17e784c126f3352c33402256bbee3307c782a" exitCode=0 Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.053975 4687 generic.go:334] "Generic (PLEG): container finished" podID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerID="1580f9cf67a5bf2e83f7642389d6a8aae6b5c2a79a9dd44721f17c4fbfd86f83" exitCode=143 Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.053890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f2a199d-f910-4280-8a19-b1b454ce4f05","Type":"ContainerDied","Data":"69b5ac23464b55a181aec9f8d1c17e784c126f3352c33402256bbee3307c782a"} Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.054189 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f2a199d-f910-4280-8a19-b1b454ce4f05","Type":"ContainerDied","Data":"1580f9cf67a5bf2e83f7642389d6a8aae6b5c2a79a9dd44721f17c4fbfd86f83"} Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.056665 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerID="8c77c79bc73fa982a193908bc638c66d386e053810154c26441d60e1d54719fd" exitCode=143 Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.056782 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerID="b363dbd02f7bb7609aa016c860061d551ac2373d948e59c2dded4d8918129244" exitCode=143 Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.056692 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f4158411-09d5-4a5a-8032-20e25b26d36e","Type":"ContainerDied","Data":"8c77c79bc73fa982a193908bc638c66d386e053810154c26441d60e1d54719fd"} Mar 12 16:24:50 crc kubenswrapper[4687]: I0312 16:24:50.057975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f4158411-09d5-4a5a-8032-20e25b26d36e","Type":"ContainerDied","Data":"b363dbd02f7bb7609aa016c860061d551ac2373d948e59c2dded4d8918129244"} Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.611451 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-57gjt"] Mar 12 16:24:52 crc kubenswrapper[4687]: E0312 16:24:52.612648 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4475047-ada2-4639-a0f1-2278c9518e5b" containerName="init" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.612812 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4475047-ada2-4639-a0f1-2278c9518e5b" containerName="init" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.613791 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4475047-ada2-4639-a0f1-2278c9518e5b" containerName="init" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.616153 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.622167 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57gjt"] Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.730113 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl59\" (UniqueName: \"kubernetes.io/projected/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-kube-api-access-8jl59\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.730155 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-utilities\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.730267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-catalog-content\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.834507 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jl59\" (UniqueName: \"kubernetes.io/projected/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-kube-api-access-8jl59\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.834564 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-utilities\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.835268 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-utilities\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.835453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-catalog-content\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.835866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-catalog-content\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.872762 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8jl59\" (UniqueName: \"kubernetes.io/projected/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-kube-api-access-8jl59\") pod \"redhat-operators-57gjt\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:52 crc kubenswrapper[4687]: I0312 16:24:52.936899 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:24:54 crc kubenswrapper[4687]: I0312 16:24:54.131852 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:24:54 crc kubenswrapper[4687]: I0312 16:24:54.215037 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b44xh"] Mar 12 16:24:54 crc kubenswrapper[4687]: I0312 16:24:54.215597 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" containerID="cri-o://3c1bffbaaca8c35b89b7a77128e2905c6f88d5962c9843c013c9cb7d8ffc1acb" gracePeriod=10 Mar 12 16:24:55 crc kubenswrapper[4687]: I0312 16:24:55.078743 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: connect: connection refused" Mar 12 16:24:55 crc kubenswrapper[4687]: I0312 16:24:55.157986 4687 generic.go:334] "Generic (PLEG): container finished" podID="e7ca2b54-480e-437e-951f-37487cf288da" containerID="3c1bffbaaca8c35b89b7a77128e2905c6f88d5962c9843c013c9cb7d8ffc1acb" exitCode=0 Mar 12 16:24:55 crc kubenswrapper[4687]: I0312 16:24:55.158035 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" event={"ID":"e7ca2b54-480e-437e-951f-37487cf288da","Type":"ContainerDied","Data":"3c1bffbaaca8c35b89b7a77128e2905c6f88d5962c9843c013c9cb7d8ffc1acb"} Mar 12 16:25:00 crc kubenswrapper[4687]: I0312 16:25:00.078223 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: connect: connection refused" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.072186 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.081613 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.099804 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189204 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-scripts\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189263 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-config-data\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-combined-ca-bundle\") pod \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189397 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-logs\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189418 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-config-data\") pod \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189444 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-combined-ca-bundle\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189531 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f45ps\" (UniqueName: \"kubernetes.io/projected/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-kube-api-access-f45ps\") pod \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189598 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-internal-tls-certs\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189701 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-httpd-run\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189746 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-scripts\") pod \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\" (UID: 
\"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189797 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnn29\" (UniqueName: \"kubernetes.io/projected/f4158411-09d5-4a5a-8032-20e25b26d36e-kube-api-access-tnn29\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189822 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-credential-keys\") pod \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"f4158411-09d5-4a5a-8032-20e25b26d36e\" (UID: \"f4158411-09d5-4a5a-8032-20e25b26d36e\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189956 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-logs" (OuterVolumeSpecName: "logs") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.189980 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-fernet-keys\") pod \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\" (UID: \"45691b3d-ae9c-4c7d-b053-76db72b4ca3d\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.190682 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.194789 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.197967 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4158411-09d5-4a5a-8032-20e25b26d36e-kube-api-access-tnn29" (OuterVolumeSpecName: "kube-api-access-tnn29") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). InnerVolumeSpecName "kube-api-access-tnn29". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.198599 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-scripts" (OuterVolumeSpecName: "scripts") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.200816 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-kube-api-access-f45ps" (OuterVolumeSpecName: "kube-api-access-f45ps") pod "45691b3d-ae9c-4c7d-b053-76db72b4ca3d" (UID: "45691b3d-ae9c-4c7d-b053-76db72b4ca3d"). InnerVolumeSpecName "kube-api-access-f45ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.206715 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "45691b3d-ae9c-4c7d-b053-76db72b4ca3d" (UID: "45691b3d-ae9c-4c7d-b053-76db72b4ca3d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.220532 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g7nzf" event={"ID":"45691b3d-ae9c-4c7d-b053-76db72b4ca3d","Type":"ContainerDied","Data":"0e5432f5506d89869be9a2826d4a4270e68d60fb4854557cfeb432c4b88d65d7"} Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.220559 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g7nzf" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.220571 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e5432f5506d89869be9a2826d4a4270e68d60fb4854557cfeb432c4b88d65d7" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.222311 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-scripts" (OuterVolumeSpecName: "scripts") pod "45691b3d-ae9c-4c7d-b053-76db72b4ca3d" (UID: "45691b3d-ae9c-4c7d-b053-76db72b4ca3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.226728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "45691b3d-ae9c-4c7d-b053-76db72b4ca3d" (UID: "45691b3d-ae9c-4c7d-b053-76db72b4ca3d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.227576 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f2a199d-f910-4280-8a19-b1b454ce4f05","Type":"ContainerDied","Data":"71129e9c019f0b7d15f00d55507315bd8a35e0f607e4390a278de2cf0655a714"} Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.227623 4687 scope.go:117] "RemoveContainer" containerID="69b5ac23464b55a181aec9f8d1c17e784c126f3352c33402256bbee3307c782a" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.227768 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.233711 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db" (OuterVolumeSpecName: "glance") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). 
InnerVolumeSpecName "pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.247708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f4158411-09d5-4a5a-8032-20e25b26d36e","Type":"ContainerDied","Data":"b4cab2b456e7c17f0647d6bb498b65bba0e76943a536e45980947faca32a0eb6"} Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.247777 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.249861 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-config-data" (OuterVolumeSpecName: "config-data") pod "45691b3d-ae9c-4c7d-b053-76db72b4ca3d" (UID: "45691b3d-ae9c-4c7d-b053-76db72b4ca3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.257922 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.270262 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.270621 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45691b3d-ae9c-4c7d-b053-76db72b4ca3d" (UID: "45691b3d-ae9c-4c7d-b053-76db72b4ca3d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292156 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-combined-ca-bundle\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292318 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-scripts\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292388 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-httpd-run\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292415 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-public-tls-certs\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292448 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-config-data\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292565 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-logs\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292616 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhm5x\" (UniqueName: \"kubernetes.io/projected/2f2a199d-f910-4280-8a19-b1b454ce4f05-kube-api-access-hhm5x\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.292730 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"2f2a199d-f910-4280-8a19-b1b454ce4f05\" (UID: \"2f2a199d-f910-4280-8a19-b1b454ce4f05\") " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293029 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293560 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-logs" (OuterVolumeSpecName: "logs") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293597 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnn29\" (UniqueName: \"kubernetes.io/projected/f4158411-09d5-4a5a-8032-20e25b26d36e-kube-api-access-tnn29\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293613 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293639 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") on node \"crc\" " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293652 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293663 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293671 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293681 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293690 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293699 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f45ps\" (UniqueName: \"kubernetes.io/projected/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-kube-api-access-f45ps\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293710 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293719 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293728 4687 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4158411-09d5-4a5a-8032-20e25b26d36e-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.293736 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45691b3d-ae9c-4c7d-b053-76db72b4ca3d-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.299578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-scripts" (OuterVolumeSpecName: "scripts") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.300912 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2a199d-f910-4280-8a19-b1b454ce4f05-kube-api-access-hhm5x" (OuterVolumeSpecName: "kube-api-access-hhm5x") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "kube-api-access-hhm5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.321175 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9" (OuterVolumeSpecName: "glance") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.321673 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-config-data" (OuterVolumeSpecName: "config-data") pod "f4158411-09d5-4a5a-8032-20e25b26d36e" (UID: "f4158411-09d5-4a5a-8032-20e25b26d36e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.326425 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.345009 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.345296 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db") on node "crc" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.350857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.378907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-config-data" (OuterVolumeSpecName: "config-data") pod "2f2a199d-f910-4280-8a19-b1b454ce4f05" (UID: "2f2a199d-f910-4280-8a19-b1b454ce4f05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395767 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395800 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395810 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395821 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395833 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4158411-09d5-4a5a-8032-20e25b26d36e-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395843 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f2a199d-f910-4280-8a19-b1b454ce4f05-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395852 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhm5x\" (UniqueName: \"kubernetes.io/projected/2f2a199d-f910-4280-8a19-b1b454ce4f05-kube-api-access-hhm5x\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395884 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") on node \"crc\" " Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.395896 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f2a199d-f910-4280-8a19-b1b454ce4f05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.423031 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.423180 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9") on node "crc" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.498462 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.577287 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.604309 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624002 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: E0312 16:25:01.624451 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-httpd" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624471 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-httpd" Mar 12 16:25:01 crc kubenswrapper[4687]: E0312 16:25:01.624487 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-log" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624494 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-log" Mar 12 16:25:01 crc kubenswrapper[4687]: E0312 16:25:01.624527 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-log" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624534 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-log" Mar 12 16:25:01 crc kubenswrapper[4687]: E0312 16:25:01.624550 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-httpd" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624558 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-httpd" Mar 12 16:25:01 crc kubenswrapper[4687]: E0312 16:25:01.624576 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45691b3d-ae9c-4c7d-b053-76db72b4ca3d" containerName="keystone-bootstrap" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624583 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="45691b3d-ae9c-4c7d-b053-76db72b4ca3d" containerName="keystone-bootstrap" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624776 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-httpd" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624799 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-httpd" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624811 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" containerName="glance-log" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624822 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" containerName="glance-log" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.624829 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="45691b3d-ae9c-4c7d-b053-76db72b4ca3d" containerName="keystone-bootstrap" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.625975 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.628507 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gln9t" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.628734 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.628862 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.634177 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.644028 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.667948 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.702861 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: E0312 16:25:01.732818 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Mar 12 16:25:01 crc kubenswrapper[4687]: E0312 16:25:01.733003 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56hfbh6hb9h5d6h585h647h654h57dhb4h55chf8h5fch5ffh565h55bh674h58bh5b9h68bh575h577h7dh694h647h668h569h675h5f5hch678h597q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7k7d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ef7e01d7-21f4-49be-8700-994e45280f37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.736699 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.766240 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2a199d-f910-4280-8a19-b1b454ce4f05" path="/var/lib/kubelet/pods/2f2a199d-f910-4280-8a19-b1b454ce4f05/volumes" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.769975 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4158411-09d5-4a5a-8032-20e25b26d36e" path="/var/lib/kubelet/pods/f4158411-09d5-4a5a-8032-20e25b26d36e/volumes" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.771911 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.774311 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.779386 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.781609 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.788794 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.803539 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.803586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.803619 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.803824 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.803866 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9nf4\" (UniqueName: \"kubernetes.io/projected/b6d81f6b-475e-4a5b-9fd8-006856dd645d-kube-api-access-l9nf4\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.803971 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.804374 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.804426 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-logs\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.906309 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.906400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.906921 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.906970 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-logs\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.906994 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907039 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907084 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907108 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907231 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907291 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bp4p\" (UniqueName: \"kubernetes.io/projected/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-kube-api-access-7bp4p\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907472 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907522 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9nf4\" (UniqueName: \"kubernetes.io/projected/b6d81f6b-475e-4a5b-9fd8-006856dd645d-kube-api-access-l9nf4\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907581 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907688 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907709 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907772 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.907773 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-logs\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.912175 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.912553 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.913196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.916029 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.916841 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.916878 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b302cf2d10698cf3ec9c6d22e773973781febffa91dff6b5e0b4b9a361287f8c/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.943563 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9nf4\" (UniqueName: \"kubernetes.io/projected/b6d81f6b-475e-4a5b-9fd8-006856dd645d-kube-api-access-l9nf4\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.959339 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " pod="openstack/glance-default-external-api-0" Mar 12 16:25:01 crc kubenswrapper[4687]: I0312 16:25:01.965763 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010186 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010289 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010325 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010394 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.010466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bp4p\" (UniqueName: \"kubernetes.io/projected/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-kube-api-access-7bp4p\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.011285 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.011521 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.013917 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.013944 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac1099220b348776e40356de6ace90133c51ecbf0c7d7d5992bf76d1ea170c4e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.017685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.017779 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.017796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.018310 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.033759 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bp4p\" (UniqueName: \"kubernetes.io/projected/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-kube-api-access-7bp4p\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.064666 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.096269 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.245545 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g7nzf"] Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.253273 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g7nzf"] Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.328818 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v655j"] Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.330138 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.331793 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.332475 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.332570 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.333278 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lg9js" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.333937 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.352255 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v655j"] Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.424254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-scripts\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.424309 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lflgf\" (UniqueName: \"kubernetes.io/projected/163922be-91e7-4655-8802-b92c4699bad8-kube-api-access-lflgf\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.424415 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-fernet-keys\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.424469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-combined-ca-bundle\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.424547 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-credential-keys\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.424566 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-config-data\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.526222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-credential-keys\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.526277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-config-data\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.526390 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-scripts\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.526435 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lflgf\" (UniqueName: \"kubernetes.io/projected/163922be-91e7-4655-8802-b92c4699bad8-kube-api-access-lflgf\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.526489 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-fernet-keys\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.526556 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-combined-ca-bundle\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.531330 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-credential-keys\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.531630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-fernet-keys\") pod \"keystone-bootstrap-v655j\" (UID: 
\"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.532341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-config-data\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.534760 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-combined-ca-bundle\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.546423 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-scripts\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.551871 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lflgf\" (UniqueName: \"kubernetes.io/projected/163922be-91e7-4655-8802-b92c4699bad8-kube-api-access-lflgf\") pod \"keystone-bootstrap-v655j\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:02 crc kubenswrapper[4687]: I0312 16:25:02.676321 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:03 crc kubenswrapper[4687]: I0312 16:25:03.752543 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45691b3d-ae9c-4c7d-b053-76db72b4ca3d" path="/var/lib/kubelet/pods/45691b3d-ae9c-4c7d-b053-76db72b4ca3d/volumes" Mar 12 16:25:06 crc kubenswrapper[4687]: I0312 16:25:06.298566 4687 generic.go:334] "Generic (PLEG): container finished" podID="eda73682-741c-41fd-80be-97995f73b1e1" containerID="dbfdc18c61b8236292443d6e9b81a4436b38f6691c04f18cbfa7ee77fc877cc0" exitCode=0 Mar 12 16:25:06 crc kubenswrapper[4687]: I0312 16:25:06.298727 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jqpm6" event={"ID":"eda73682-741c-41fd-80be-97995f73b1e1","Type":"ContainerDied","Data":"dbfdc18c61b8236292443d6e9b81a4436b38f6691c04f18cbfa7ee77fc877cc0"} Mar 12 16:25:10 crc kubenswrapper[4687]: I0312 16:25:10.078698 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Mar 12 16:25:10 crc kubenswrapper[4687]: I0312 16:25:10.079413 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:25:11 crc kubenswrapper[4687]: E0312 16:25:11.026581 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Mar 12 16:25:11 crc kubenswrapper[4687]: E0312 16:25:11.027039 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdghq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-zd9d7_openstack(9846a398-b104-418e-ace9-6eb022ddacbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:25:11 crc kubenswrapper[4687]: E0312 16:25:11.028199 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-zd9d7" podUID="9846a398-b104-418e-ace9-6eb022ddacbb" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.158348 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.165784 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252417 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-sb\") pod \"e7ca2b54-480e-437e-951f-37487cf288da\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252579 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-nb\") pod \"e7ca2b54-480e-437e-951f-37487cf288da\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252615 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-combined-ca-bundle\") pod \"eda73682-741c-41fd-80be-97995f73b1e1\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252656 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbgm\" (UniqueName: \"kubernetes.io/projected/eda73682-741c-41fd-80be-97995f73b1e1-kube-api-access-jhbgm\") pod \"eda73682-741c-41fd-80be-97995f73b1e1\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252814 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-config\") pod \"eda73682-741c-41fd-80be-97995f73b1e1\" (UID: \"eda73682-741c-41fd-80be-97995f73b1e1\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252869 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-config\") pod \"e7ca2b54-480e-437e-951f-37487cf288da\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252939 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwfvq\" (UniqueName: \"kubernetes.io/projected/e7ca2b54-480e-437e-951f-37487cf288da-kube-api-access-rwfvq\") pod \"e7ca2b54-480e-437e-951f-37487cf288da\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.252958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-swift-storage-0\") pod \"e7ca2b54-480e-437e-951f-37487cf288da\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.253006 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-svc\") pod \"e7ca2b54-480e-437e-951f-37487cf288da\" (UID: \"e7ca2b54-480e-437e-951f-37487cf288da\") " Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.264612 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ca2b54-480e-437e-951f-37487cf288da-kube-api-access-rwfvq" (OuterVolumeSpecName: "kube-api-access-rwfvq") pod 
"e7ca2b54-480e-437e-951f-37487cf288da" (UID: "e7ca2b54-480e-437e-951f-37487cf288da"). InnerVolumeSpecName "kube-api-access-rwfvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.276330 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda73682-741c-41fd-80be-97995f73b1e1-kube-api-access-jhbgm" (OuterVolumeSpecName: "kube-api-access-jhbgm") pod "eda73682-741c-41fd-80be-97995f73b1e1" (UID: "eda73682-741c-41fd-80be-97995f73b1e1"). InnerVolumeSpecName "kube-api-access-jhbgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.308719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7ca2b54-480e-437e-951f-37487cf288da" (UID: "e7ca2b54-480e-437e-951f-37487cf288da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.309949 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-config" (OuterVolumeSpecName: "config") pod "e7ca2b54-480e-437e-951f-37487cf288da" (UID: "e7ca2b54-480e-437e-951f-37487cf288da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.310990 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eda73682-741c-41fd-80be-97995f73b1e1" (UID: "eda73682-741c-41fd-80be-97995f73b1e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.317840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7ca2b54-480e-437e-951f-37487cf288da" (UID: "e7ca2b54-480e-437e-951f-37487cf288da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.322119 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7ca2b54-480e-437e-951f-37487cf288da" (UID: "e7ca2b54-480e-437e-951f-37487cf288da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.330288 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7ca2b54-480e-437e-951f-37487cf288da" (UID: "e7ca2b54-480e-437e-951f-37487cf288da"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.339555 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-config" (OuterVolumeSpecName: "config") pod "eda73682-741c-41fd-80be-97995f73b1e1" (UID: "eda73682-741c-41fd-80be-97995f73b1e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.342751 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.342762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" event={"ID":"e7ca2b54-480e-437e-951f-37487cf288da","Type":"ContainerDied","Data":"05d8d038af9318ef6f5e32d78cc74917d8f273e601df68e6d7dc41426e0fe0e6"} Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.349807 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jqpm6" event={"ID":"eda73682-741c-41fd-80be-97995f73b1e1","Type":"ContainerDied","Data":"750eed0692f29f5cc61d0f4d6347d8d3acf7a5d5d366043da794fb829da5523e"} Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.349841 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="750eed0692f29f5cc61d0f4d6347d8d3acf7a5d5d366043da794fb829da5523e" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.349900 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jqpm6" Mar 12 16:25:11 crc kubenswrapper[4687]: E0312 16:25:11.352645 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-zd9d7" podUID="9846a398-b104-418e-ace9-6eb022ddacbb" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355248 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbgm\" (UniqueName: \"kubernetes.io/projected/eda73682-741c-41fd-80be-97995f73b1e1-kube-api-access-jhbgm\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355280 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355293 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355303 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwfvq\" (UniqueName: \"kubernetes.io/projected/e7ca2b54-480e-437e-951f-37487cf288da-kube-api-access-rwfvq\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355312 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355320 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355328 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355338 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7ca2b54-480e-437e-951f-37487cf288da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.355346 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda73682-741c-41fd-80be-97995f73b1e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.456234 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b44xh"] Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.482298 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-b44xh"] Mar 12 16:25:11 crc kubenswrapper[4687]: I0312 16:25:11.748858 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ca2b54-480e-437e-951f-37487cf288da" path="/var/lib/kubelet/pods/e7ca2b54-480e-437e-951f-37487cf288da/volumes" Mar 12 16:25:12 crc kubenswrapper[4687]: E0312 16:25:12.529421 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 12 16:25:12 crc kubenswrapper[4687]: E0312 16:25:12.529844 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dznbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9mcqx_openstack(de14ef1d-d5a7-490a-a522-8d4cf39989b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.536098 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c95nn"] Mar 12 16:25:12 crc kubenswrapper[4687]: E0312 16:25:12.536310 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9mcqx" podUID="de14ef1d-d5a7-490a-a522-8d4cf39989b4" Mar 12 16:25:12 crc kubenswrapper[4687]: E0312 16:25:12.537021 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda73682-741c-41fd-80be-97995f73b1e1" containerName="neutron-db-sync" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.537040 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda73682-741c-41fd-80be-97995f73b1e1" containerName="neutron-db-sync" Mar 12 16:25:12 crc kubenswrapper[4687]: E0312 16:25:12.537063 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="init" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 
16:25:12.537069 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="init" Mar 12 16:25:12 crc kubenswrapper[4687]: E0312 16:25:12.537088 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.537093 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.537283 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda73682-741c-41fd-80be-97995f73b1e1" containerName="neutron-db-sync" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.537302 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.538331 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.561903 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c95nn"] Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.636992 4687 scope.go:117] "RemoveContainer" containerID="1580f9cf67a5bf2e83f7642389d6a8aae6b5c2a79a9dd44721f17c4fbfd86f83" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.691284 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.691450 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.691499 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-config\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.691526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrng\" (UniqueName: \"kubernetes.io/projected/c15f16c6-ca65-49db-931f-d62df73da465-kube-api-access-mnrng\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.691569 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-svc\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.691592 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.713462 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-784df9f744-9vs8d"] Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.715326 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.737716 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6zb78" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.738794 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.738823 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.741347 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-784df9f744-9vs8d"] Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.757288 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.793891 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.794153 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-config\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.794182 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrng\" (UniqueName: \"kubernetes.io/projected/c15f16c6-ca65-49db-931f-d62df73da465-kube-api-access-mnrng\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.794224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-svc\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.794244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.794328 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.795171 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.795685 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.796187 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-config\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.796821 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-svc\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.807603 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.825879 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrng\" (UniqueName: \"kubernetes.io/projected/c15f16c6-ca65-49db-931f-d62df73da465-kube-api-access-mnrng\") pod \"dnsmasq-dns-55f844cf75-c95nn\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.892548 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.896074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc568\" (UniqueName: \"kubernetes.io/projected/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-kube-api-access-bc568\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.896474 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-combined-ca-bundle\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.896669 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-config\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.896752 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-httpd-config\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:12 crc kubenswrapper[4687]: I0312 16:25:12.896869 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-ovndb-tls-certs\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.001419 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-config\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.001487 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-httpd-config\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.001559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-ovndb-tls-certs\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.001653 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc568\" (UniqueName: \"kubernetes.io/projected/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-kube-api-access-bc568\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " 
pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.001689 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-combined-ca-bundle\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.031976 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-httpd-config\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.032412 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc568\" (UniqueName: \"kubernetes.io/projected/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-kube-api-access-bc568\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.032417 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-combined-ca-bundle\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.032988 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-ovndb-tls-certs\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.033045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-config\") pod \"neutron-784df9f744-9vs8d\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.155016 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.370183 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57gjt"] Mar 12 16:25:13 crc kubenswrapper[4687]: E0312 16:25:13.403702 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9mcqx" podUID="de14ef1d-d5a7-490a-a522-8d4cf39989b4" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.531992 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.606901 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v655j"] Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.662680 4687 scope.go:117] "RemoveContainer" containerID="8c77c79bc73fa982a193908bc638c66d386e053810154c26441d60e1d54719fd" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.911323 4687 scope.go:117] "RemoveContainer" containerID="b363dbd02f7bb7609aa016c860061d551ac2373d948e59c2dded4d8918129244" Mar 12 16:25:13 crc kubenswrapper[4687]: I0312 16:25:13.990164 4687 scope.go:117] "RemoveContainer" containerID="3c1bffbaaca8c35b89b7a77128e2905c6f88d5962c9843c013c9cb7d8ffc1acb" Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.078997 4687 scope.go:117] "RemoveContainer" containerID="41f611ff77ae05ae6f437a0e54feb2215f2061a735acb6f97f0a76c3464bf833" Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.122049 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.122101 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.122144 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.123016 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6a61ada4aa1cfe2e90a3b1ef14049216d76e56f45c1ef544713761e4d6a8f41"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.123074 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://a6a61ada4aa1cfe2e90a3b1ef14049216d76e56f45c1ef544713761e4d6a8f41" gracePeriod=600 Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.313642 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:25:14 crc kubenswrapper[4687]: W0312 16:25:14.345857 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6d81f6b_475e_4a5b_9fd8_006856dd645d.slice/crio-b2765b971ec2402eeed4fe3b127c14a30f1d85f5cff205f9e8986caf731fd36b WatchSource:0}: Error finding container b2765b971ec2402eeed4fe3b127c14a30f1d85f5cff205f9e8986caf731fd36b: Status 404 returned error can't find the container with id b2765b971ec2402eeed4fe3b127c14a30f1d85f5cff205f9e8986caf731fd36b Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.452241 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="a6a61ada4aa1cfe2e90a3b1ef14049216d76e56f45c1ef544713761e4d6a8f41" exitCode=0 Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.452315 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"a6a61ada4aa1cfe2e90a3b1ef14049216d76e56f45c1ef544713761e4d6a8f41"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.452346 4687 scope.go:117] "RemoveContainer" containerID="f1ffdc3800e5c81c03048cac69c6e0d74a9a699fce31dea6098c57fea14e8d99" Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.472023 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerStarted","Data":"f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.494997 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c95nn"] Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.499405 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6d81f6b-475e-4a5b-9fd8-006856dd645d","Type":"ContainerStarted","Data":"b2765b971ec2402eeed4fe3b127c14a30f1d85f5cff205f9e8986caf731fd36b"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.524399 4687 generic.go:334] "Generic (PLEG): container finished" podID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerID="82d8b337c242758d743ba73ffa4db75407e2644bd2c642ca03f61a17b584cba6" exitCode=0 Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.524472 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57gjt" event={"ID":"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb","Type":"ContainerDied","Data":"82d8b337c242758d743ba73ffa4db75407e2644bd2c642ca03f61a17b584cba6"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.524516 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57gjt" event={"ID":"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb","Type":"ContainerStarted","Data":"a7b89cd7791f05f9a67dc61ea30862727088c27b1fa7d737f82e7f4daf1002ba"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.537326 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v655j" event={"ID":"163922be-91e7-4655-8802-b92c4699bad8","Type":"ContainerStarted","Data":"8a64a5f454239ed79cc9a4dcaf04d01f5efb59cde19c9a99bca8ac98dcc26034"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.537571 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v655j" 
event={"ID":"163922be-91e7-4655-8802-b92c4699bad8","Type":"ContainerStarted","Data":"0b29b1d15fc32e396510e471c22fbc8da9b3a5e02b822cba9e0547bcf4511c21"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.565265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bqj4x" event={"ID":"8483f386-39e7-45eb-bd05-02dcdce7677f","Type":"ContainerStarted","Data":"68f9290961698fcc142d1c9d4a3d15b482223729176554885285d7fb51dafd38"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.569936 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0","Type":"ContainerStarted","Data":"b940cd7b21acfb934dff9664bf4ceaa37227d75fd28120774131b6630ec0f74b"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.584546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5ptv6" event={"ID":"5649bea8-d269-4e13-bcf9-8e58f4cdb132","Type":"ContainerStarted","Data":"706ffe0ee0580ee5c68a799ee5b454a588dd0edbb5186508b697c381e8d06238"} Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.622559 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v655j" podStartSLOduration=12.622517521 podStartE2EDuration="12.622517521s" podCreationTimestamp="2026-03-12 16:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:14.583859646 +0000 UTC m=+1363.547821990" watchObservedRunningTime="2026-03-12 16:25:14.622517521 +0000 UTC m=+1363.586479865" Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.660625 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bqj4x" podStartSLOduration=4.39499128 podStartE2EDuration="31.66060214s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="2026-03-12 16:24:45.156920815 +0000 UTC m=+1334.120883159" lastFinishedPulling="2026-03-12 16:25:12.422531675 +0000 UTC m=+1361.386494019" observedRunningTime="2026-03-12 16:25:14.608855588 +0000 UTC m=+1363.572817932" watchObservedRunningTime="2026-03-12 16:25:14.66060214 +0000 UTC m=+1363.624564484" Mar 12 16:25:14 crc kubenswrapper[4687]: W0312 16:25:14.708507 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3641fb4f_ebfa_4d32_a37c_ca304c44ccab.slice/crio-3b1d40c1bcd374f05576d12410233c9c546189bbe71d2b66b2a16a6c6a628b68 WatchSource:0}: Error finding container 3b1d40c1bcd374f05576d12410233c9c546189bbe71d2b66b2a16a6c6a628b68: Status 404 returned error can't find the container with id 3b1d40c1bcd374f05576d12410233c9c546189bbe71d2b66b2a16a6c6a628b68 Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.799015 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-784df9f744-9vs8d"] Mar 12 16:25:14 crc kubenswrapper[4687]: I0312 16:25:14.816290 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5ptv6" podStartSLOduration=6.131372478 podStartE2EDuration="31.816260907s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="2026-03-12 16:24:45.327183751 +0000 UTC m=+1334.291146095" lastFinishedPulling="2026-03-12 16:25:11.01207218 +0000 UTC m=+1359.976034524" observedRunningTime="2026-03-12 16:25:14.643480113 +0000 UTC m=+1363.607442467" watchObservedRunningTime="2026-03-12 
16:25:14.816260907 +0000 UTC m=+1363.780223251" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.079610 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-b44xh" podUID="e7ca2b54-480e-437e-951f-37487cf288da" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.117001 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68cdbd957f-bt7lg"] Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.122921 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.129733 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.129937 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.142074 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68cdbd957f-bt7lg"] Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.289574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-public-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.289903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-internal-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.289933 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdkt\" (UniqueName: \"kubernetes.io/projected/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-kube-api-access-wzdkt\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.289960 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-combined-ca-bundle\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.290012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-httpd-config\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.290043 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-config\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " 
pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.290080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-ovndb-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.391878 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-httpd-config\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.391938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-config\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.391966 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-ovndb-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.392080 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-public-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.392125 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-internal-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.392146 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdkt\" (UniqueName: \"kubernetes.io/projected/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-kube-api-access-wzdkt\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.392169 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-combined-ca-bundle\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.398476 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-public-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.399736 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-internal-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.400001 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-combined-ca-bundle\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.400156 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-config\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.400223 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-ovndb-tls-certs\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.409796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-httpd-config\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.424110 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdkt\" (UniqueName: \"kubernetes.io/projected/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-kube-api-access-wzdkt\") pod \"neutron-68cdbd957f-bt7lg\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.466968 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.635423 4687 generic.go:334] "Generic (PLEG): container finished" podID="c15f16c6-ca65-49db-931f-d62df73da465" containerID="4ece12bc9af7ebaad8d254643ab96d23d6f56a775e868a540bb91d32250926ba" exitCode=0 Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.635484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" event={"ID":"c15f16c6-ca65-49db-931f-d62df73da465","Type":"ContainerDied","Data":"4ece12bc9af7ebaad8d254643ab96d23d6f56a775e868a540bb91d32250926ba"} Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.635511 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" event={"ID":"c15f16c6-ca65-49db-931f-d62df73da465","Type":"ContainerStarted","Data":"1c0184e0506ef8e6f744654f7fd39ace055e92512595d5648a915c524840939c"} Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.688054 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0","Type":"ContainerStarted","Data":"97fd5a5ead64e55a87cb413325440d309964310bd709e90795867c2632966c50"} Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.720879 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerStarted","Data":"342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf"} Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.720923 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerStarted","Data":"3b1d40c1bcd374f05576d12410233c9c546189bbe71d2b66b2a16a6c6a628b68"} Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.722742 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.784075 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4"} Mar 12 16:25:15 crc kubenswrapper[4687]: I0312 16:25:15.799118 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-784df9f744-9vs8d" podStartSLOduration=3.799099264 podStartE2EDuration="3.799099264s" podCreationTimestamp="2026-03-12 16:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:15.763166364 +0000 UTC m=+1364.727128708" watchObservedRunningTime="2026-03-12 16:25:15.799099264 +0000 UTC m=+1364.763061608" Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.509436 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68cdbd957f-bt7lg"] Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.808743 4687 generic.go:334] "Generic (PLEG): container finished" podID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerID="93f50d0260b22a69b834c0a00acc8050fadcde322b69ab711c598bcb4e0e2e61" exitCode=0 Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.808878 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57gjt" 
event={"ID":"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb","Type":"ContainerDied","Data":"93f50d0260b22a69b834c0a00acc8050fadcde322b69ab711c598bcb4e0e2e61"} Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.817689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cdbd957f-bt7lg" event={"ID":"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1","Type":"ContainerStarted","Data":"cd2f0629ca3a46d87cc9cbd29b1a0c7a100680cce5184d0e9be1006921abaa87"} Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.829698 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" event={"ID":"c15f16c6-ca65-49db-931f-d62df73da465","Type":"ContainerStarted","Data":"6b55d40ab435020e60158b8ab39b028d06aaa300834b855272a05efefc836b8e"} Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.829736 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.875131 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6d81f6b-475e-4a5b-9fd8-006856dd645d","Type":"ContainerStarted","Data":"8b424bbb68f36f99a42bfe10ee37f0b99bf3b64c1459d6ddb6162a8173d24192"} Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.880941 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/0.log" Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.883796 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerStarted","Data":"8f95ed9baf2cd86471f2c2deb6370710f94714513976d01f2d99b5fd2dd40608"} Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.884063 4687 scope.go:117] "RemoveContainer" containerID="8f95ed9baf2cd86471f2c2deb6370710f94714513976d01f2d99b5fd2dd40608" Mar 12 16:25:16 crc kubenswrapper[4687]: I0312 16:25:16.908250 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" podStartSLOduration=4.908223536 podStartE2EDuration="4.908223536s" podCreationTimestamp="2026-03-12 16:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:16.901280387 +0000 UTC m=+1365.865242731" watchObservedRunningTime="2026-03-12 16:25:16.908223536 +0000 UTC m=+1365.872185880" Mar 12 16:25:17 crc kubenswrapper[4687]: I0312 16:25:17.917657 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cdbd957f-bt7lg" event={"ID":"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1","Type":"ContainerStarted","Data":"3ddc8db5928616656f2964d46677648cda6f9c7ffd840ee20ebb88fb57e174fb"} Mar 12 16:25:17 crc kubenswrapper[4687]: I0312 16:25:17.918278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cdbd957f-bt7lg" event={"ID":"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1","Type":"ContainerStarted","Data":"01ef395583a96bd2bd9e7aea00f0d08dc770a624aed405cceb7e312fded76ec3"} Mar 12 16:25:17 crc kubenswrapper[4687]: I0312 16:25:17.919530 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:17 crc kubenswrapper[4687]: I0312 16:25:17.944536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b6d81f6b-475e-4a5b-9fd8-006856dd645d","Type":"ContainerStarted","Data":"f7ded872c83d08118a23a01f5b2fa960a3a7411e0e4f4156b7d0a79aaab369ac"} Mar 12 16:25:17 crc kubenswrapper[4687]: I0312 16:25:17.970477 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68cdbd957f-bt7lg" podStartSLOduration=2.970453809 podStartE2EDuration="2.970453809s" podCreationTimestamp="2026-03-12 16:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:17.954909335 +0000 UTC m=+1366.918871689" watchObservedRunningTime="2026-03-12 16:25:17.970453809 +0000 UTC m=+1366.934416153" Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.004003 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0","Type":"ContainerStarted","Data":"1661c28baa01e9cfaa3aa820ae239bdea1de27b52ffe4d86baf84f587eef3221"} Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.013705 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.010345658 podStartE2EDuration="17.010345658s" podCreationTimestamp="2026-03-12 16:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:18.007050977 +0000 UTC m=+1366.971013331" watchObservedRunningTime="2026-03-12 16:25:18.010345658 +0000 UTC m=+1366.974308002" Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.026022 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/0.log" Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.026452 4687 generic.go:334] "Generic (PLEG): container finished" podID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerID="8f95ed9baf2cd86471f2c2deb6370710f94714513976d01f2d99b5fd2dd40608" exitCode=1 Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.026506 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerDied","Data":"8f95ed9baf2cd86471f2c2deb6370710f94714513976d01f2d99b5fd2dd40608"} Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.026533 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerStarted","Data":"d4e5e55eb7e36419ede0a69c0e52a8adf492ae13c125a8dafd93347d9e732b54"} Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.027461 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.034287 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57gjt" event={"ID":"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb","Type":"ContainerStarted","Data":"9e03df690c2ab0a0255523ecfa32f26fa1ebe41e3863e2dce435b26c4b3e0293"} Mar 12 16:25:18 crc kubenswrapper[4687]: I0312 16:25:18.081160 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=17.081140659 podStartE2EDuration="17.081140659s" podCreationTimestamp="2026-03-12 16:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:18.061914334 +0000 UTC m=+1367.025876688" watchObservedRunningTime="2026-03-12 16:25:18.081140659 +0000 UTC m=+1367.045103003" Mar 12 16:25:19 crc kubenswrapper[4687]: I0312 16:25:19.058000 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/1.log" Mar 12 16:25:19 crc kubenswrapper[4687]: I0312 16:25:19.063761 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/0.log" Mar 12 16:25:19 crc kubenswrapper[4687]: I0312 16:25:19.064322 4687 generic.go:334] "Generic (PLEG): container finished" podID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerID="d4e5e55eb7e36419ede0a69c0e52a8adf492ae13c125a8dafd93347d9e732b54" exitCode=1 Mar 12 16:25:19 crc kubenswrapper[4687]: I0312 16:25:19.064453 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerDied","Data":"d4e5e55eb7e36419ede0a69c0e52a8adf492ae13c125a8dafd93347d9e732b54"} Mar 12 16:25:19 crc kubenswrapper[4687]: I0312 16:25:19.064508 4687 scope.go:117] "RemoveContainer" containerID="8f95ed9baf2cd86471f2c2deb6370710f94714513976d01f2d99b5fd2dd40608" Mar 12 16:25:19 crc kubenswrapper[4687]: I0312 16:25:19.065554 4687 scope.go:117] "RemoveContainer" containerID="d4e5e55eb7e36419ede0a69c0e52a8adf492ae13c125a8dafd93347d9e732b54" Mar 12 16:25:19 crc kubenswrapper[4687]: E0312 16:25:19.066695 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-784df9f744-9vs8d_openstack(3641fb4f-ebfa-4d32-a37c-ca304c44ccab)\"" pod="openstack/neutron-784df9f744-9vs8d" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" Mar 12 16:25:19 crc kubenswrapper[4687]: I0312 16:25:19.130968 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-57gjt" podStartSLOduration=24.346876129 podStartE2EDuration="27.130947123s" podCreationTimestamp="2026-03-12 16:24:52 +0000 UTC" firstStartedPulling="2026-03-12 16:25:14.526124991 +0000 UTC m=+1363.490087335" lastFinishedPulling="2026-03-12 16:25:17.310195985 +0000 UTC m=+1366.274158329" observedRunningTime="2026-03-12 16:25:19.108392618 +0000 UTC m=+1368.072354982" watchObservedRunningTime="2026-03-12 16:25:19.130947123 +0000 UTC m=+1368.094909477" Mar 12 16:25:20 crc kubenswrapper[4687]: I0312 16:25:20.077680 4687 scope.go:117] "RemoveContainer" containerID="d4e5e55eb7e36419ede0a69c0e52a8adf492ae13c125a8dafd93347d9e732b54" Mar 12 16:25:20 crc kubenswrapper[4687]: E0312 16:25:20.080858 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-784df9f744-9vs8d_openstack(3641fb4f-ebfa-4d32-a37c-ca304c44ccab)\"" pod="openstack/neutron-784df9f744-9vs8d" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" Mar 12 16:25:21 crc kubenswrapper[4687]: I0312 16:25:21.966337 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 16:25:21 crc kubenswrapper[4687]: I0312 16:25:21.966938 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.054111 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.056977 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.096988 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.097031 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.101404 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.101446 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.141455 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.168647 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.893492 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.937497 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.937552 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.977056 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lp6js"] Mar 12 16:25:22 crc kubenswrapper[4687]: I0312 16:25:22.977353 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" podUID="38be6432-e14f-401f-bb52-f0266cffb420" containerName="dnsmasq-dns" containerID="cri-o://875ab923c2128f241cde63c294be12dd7345300a7c2ecd6959292bbe51c0a207" gracePeriod=10 Mar 12 16:25:23 crc kubenswrapper[4687]: I0312 16:25:23.132643 4687 generic.go:334] "Generic (PLEG): container finished" podID="5649bea8-d269-4e13-bcf9-8e58f4cdb132" containerID="706ffe0ee0580ee5c68a799ee5b454a588dd0edbb5186508b697c381e8d06238" exitCode=0 Mar 12 16:25:23 crc kubenswrapper[4687]: I0312 16:25:23.132723 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5ptv6" event={"ID":"5649bea8-d269-4e13-bcf9-8e58f4cdb132","Type":"ContainerDied","Data":"706ffe0ee0580ee5c68a799ee5b454a588dd0edbb5186508b697c381e8d06238"} Mar 12 16:25:23 crc kubenswrapper[4687]: I0312 16:25:23.144011 4687 generic.go:334] "Generic (PLEG): container finished" podID="38be6432-e14f-401f-bb52-f0266cffb420" containerID="875ab923c2128f241cde63c294be12dd7345300a7c2ecd6959292bbe51c0a207" exitCode=0 Mar 12 16:25:23 crc kubenswrapper[4687]: I0312 16:25:23.145340 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" event={"ID":"38be6432-e14f-401f-bb52-f0266cffb420","Type":"ContainerDied","Data":"875ab923c2128f241cde63c294be12dd7345300a7c2ecd6959292bbe51c0a207"} Mar 12 16:25:23 crc kubenswrapper[4687]: I0312 16:25:23.145389 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:23 crc kubenswrapper[4687]: I0312 16:25:23.146001 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:24 crc kubenswrapper[4687]: I0312 16:25:24.035336 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57gjt" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" probeResult="failure" output=< Mar 12 16:25:24 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:25:24 crc kubenswrapper[4687]: > Mar 12 16:25:24 crc kubenswrapper[4687]: I0312 16:25:24.132280 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" podUID="38be6432-e14f-401f-bb52-f0266cffb420" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: connect: connection refused" Mar 12 16:25:24 crc kubenswrapper[4687]: I0312 16:25:24.159982 4687 generic.go:334] "Generic (PLEG): container finished" podID="163922be-91e7-4655-8802-b92c4699bad8" containerID="8a64a5f454239ed79cc9a4dcaf04d01f5efb59cde19c9a99bca8ac98dcc26034" exitCode=0 Mar 12 16:25:24 crc kubenswrapper[4687]: I0312 16:25:24.160048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v655j" event={"ID":"163922be-91e7-4655-8802-b92c4699bad8","Type":"ContainerDied","Data":"8a64a5f454239ed79cc9a4dcaf04d01f5efb59cde19c9a99bca8ac98dcc26034"} Mar 12 16:25:24 crc kubenswrapper[4687]: I0312 16:25:24.179219 4687 generic.go:334] "Generic (PLEG): container finished" podID="8483f386-39e7-45eb-bd05-02dcdce7677f" containerID="68f9290961698fcc142d1c9d4a3d15b482223729176554885285d7fb51dafd38" exitCode=0 Mar 12 16:25:24 crc kubenswrapper[4687]: I0312 16:25:24.179307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bqj4x" event={"ID":"8483f386-39e7-45eb-bd05-02dcdce7677f","Type":"ContainerDied","Data":"68f9290961698fcc142d1c9d4a3d15b482223729176554885285d7fb51dafd38"} Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.204473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5ptv6" event={"ID":"5649bea8-d269-4e13-bcf9-8e58f4cdb132","Type":"ContainerDied","Data":"5362aa3e93684f15ac2bd4bb4dd1c04a189272de9a185d005270dca5936541b8"} Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.204517 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5362aa3e93684f15ac2bd4bb4dd1c04a189272de9a185d005270dca5936541b8" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.207143 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/1.log" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.207593 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.207608 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.214768 4687 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-5ptv6" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.255209 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5649bea8-d269-4e13-bcf9-8e58f4cdb132-logs\") pod \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.255401 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-scripts\") pod \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.255425 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb9zl\" (UniqueName: \"kubernetes.io/projected/5649bea8-d269-4e13-bcf9-8e58f4cdb132-kube-api-access-fb9zl\") pod \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.255526 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-combined-ca-bundle\") pod \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.255619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-config-data\") pod \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\" (UID: \"5649bea8-d269-4e13-bcf9-8e58f4cdb132\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.255744 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5649bea8-d269-4e13-bcf9-8e58f4cdb132-logs" (OuterVolumeSpecName: "logs") pod "5649bea8-d269-4e13-bcf9-8e58f4cdb132" (UID: "5649bea8-d269-4e13-bcf9-8e58f4cdb132"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.256148 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5649bea8-d269-4e13-bcf9-8e58f4cdb132-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.263519 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5649bea8-d269-4e13-bcf9-8e58f4cdb132-kube-api-access-fb9zl" (OuterVolumeSpecName: "kube-api-access-fb9zl") pod "5649bea8-d269-4e13-bcf9-8e58f4cdb132" (UID: "5649bea8-d269-4e13-bcf9-8e58f4cdb132"). InnerVolumeSpecName "kube-api-access-fb9zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.264843 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-scripts" (OuterVolumeSpecName: "scripts") pod "5649bea8-d269-4e13-bcf9-8e58f4cdb132" (UID: "5649bea8-d269-4e13-bcf9-8e58f4cdb132"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.300261 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-config-data" (OuterVolumeSpecName: "config-data") pod "5649bea8-d269-4e13-bcf9-8e58f4cdb132" (UID: "5649bea8-d269-4e13-bcf9-8e58f4cdb132"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.311511 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5649bea8-d269-4e13-bcf9-8e58f4cdb132" (UID: "5649bea8-d269-4e13-bcf9-8e58f4cdb132"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.360792 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.360823 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.360832 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5649bea8-d269-4e13-bcf9-8e58f4cdb132-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.360842 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb9zl\" (UniqueName: \"kubernetes.io/projected/5649bea8-d269-4e13-bcf9-8e58f4cdb132-kube-api-access-fb9zl\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.581479 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.783608 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcvvh\" (UniqueName: \"kubernetes.io/projected/38be6432-e14f-401f-bb52-f0266cffb420-kube-api-access-bcvvh\") pod \"38be6432-e14f-401f-bb52-f0266cffb420\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.784031 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-nb\") pod \"38be6432-e14f-401f-bb52-f0266cffb420\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.784082 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-sb\") pod \"38be6432-e14f-401f-bb52-f0266cffb420\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.784208 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-svc\") pod \"38be6432-e14f-401f-bb52-f0266cffb420\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.784346 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-swift-storage-0\") pod \"38be6432-e14f-401f-bb52-f0266cffb420\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.784512 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-config\") pod \"38be6432-e14f-401f-bb52-f0266cffb420\" (UID: \"38be6432-e14f-401f-bb52-f0266cffb420\") " Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.806620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38be6432-e14f-401f-bb52-f0266cffb420-kube-api-access-bcvvh" (OuterVolumeSpecName: "kube-api-access-bcvvh") pod "38be6432-e14f-401f-bb52-f0266cffb420" (UID: "38be6432-e14f-401f-bb52-f0266cffb420"). InnerVolumeSpecName "kube-api-access-bcvvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.865841 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.865829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-config" (OuterVolumeSpecName: "config") pod "38be6432-e14f-401f-bb52-f0266cffb420" (UID: "38be6432-e14f-401f-bb52-f0266cffb420"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.877002 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38be6432-e14f-401f-bb52-f0266cffb420" (UID: "38be6432-e14f-401f-bb52-f0266cffb420"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.878539 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38be6432-e14f-401f-bb52-f0266cffb420" (UID: "38be6432-e14f-401f-bb52-f0266cffb420"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.886033 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38be6432-e14f-401f-bb52-f0266cffb420" (UID: "38be6432-e14f-401f-bb52-f0266cffb420"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.900505 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcvvh\" (UniqueName: \"kubernetes.io/projected/38be6432-e14f-401f-bb52-f0266cffb420-kube-api-access-bcvvh\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.900550 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.900559 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.900569 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.900577 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.912677 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:25 crc kubenswrapper[4687]: I0312 16:25:25.926201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38be6432-e14f-401f-bb52-f0266cffb420" (UID: "38be6432-e14f-401f-bb52-f0266cffb420"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.002292 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-db-sync-config-data\") pod \"8483f386-39e7-45eb-bd05-02dcdce7677f\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.002344 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-combined-ca-bundle\") pod \"8483f386-39e7-45eb-bd05-02dcdce7677f\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.002636 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvkwp\" (UniqueName: \"kubernetes.io/projected/8483f386-39e7-45eb-bd05-02dcdce7677f-kube-api-access-rvkwp\") pod \"8483f386-39e7-45eb-bd05-02dcdce7677f\" (UID: \"8483f386-39e7-45eb-bd05-02dcdce7677f\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.003164 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38be6432-e14f-401f-bb52-f0266cffb420-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.005184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8483f386-39e7-45eb-bd05-02dcdce7677f" (UID: "8483f386-39e7-45eb-bd05-02dcdce7677f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.006493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8483f386-39e7-45eb-bd05-02dcdce7677f-kube-api-access-rvkwp" (OuterVolumeSpecName: "kube-api-access-rvkwp") pod "8483f386-39e7-45eb-bd05-02dcdce7677f" (UID: "8483f386-39e7-45eb-bd05-02dcdce7677f"). InnerVolumeSpecName "kube-api-access-rvkwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.034223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8483f386-39e7-45eb-bd05-02dcdce7677f" (UID: "8483f386-39e7-45eb-bd05-02dcdce7677f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.104943 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-combined-ca-bundle\") pod \"163922be-91e7-4655-8802-b92c4699bad8\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.105533 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-fernet-keys\") pod \"163922be-91e7-4655-8802-b92c4699bad8\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.105600 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-credential-keys\") pod \"163922be-91e7-4655-8802-b92c4699bad8\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.105729 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lflgf\" (UniqueName: \"kubernetes.io/projected/163922be-91e7-4655-8802-b92c4699bad8-kube-api-access-lflgf\") pod \"163922be-91e7-4655-8802-b92c4699bad8\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.105847 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-config-data\") pod \"163922be-91e7-4655-8802-b92c4699bad8\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.105980 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-scripts\") pod \"163922be-91e7-4655-8802-b92c4699bad8\" (UID: \"163922be-91e7-4655-8802-b92c4699bad8\") " Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.106824 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvkwp\" (UniqueName: \"kubernetes.io/projected/8483f386-39e7-45eb-bd05-02dcdce7677f-kube-api-access-rvkwp\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.106852 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.106866 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8483f386-39e7-45eb-bd05-02dcdce7677f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.109816 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "163922be-91e7-4655-8802-b92c4699bad8" (UID: "163922be-91e7-4655-8802-b92c4699bad8"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.111981 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "163922be-91e7-4655-8802-b92c4699bad8" (UID: "163922be-91e7-4655-8802-b92c4699bad8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.112002 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163922be-91e7-4655-8802-b92c4699bad8-kube-api-access-lflgf" (OuterVolumeSpecName: "kube-api-access-lflgf") pod "163922be-91e7-4655-8802-b92c4699bad8" (UID: "163922be-91e7-4655-8802-b92c4699bad8"). InnerVolumeSpecName "kube-api-access-lflgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.112745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-scripts" (OuterVolumeSpecName: "scripts") pod "163922be-91e7-4655-8802-b92c4699bad8" (UID: "163922be-91e7-4655-8802-b92c4699bad8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.147844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "163922be-91e7-4655-8802-b92c4699bad8" (UID: "163922be-91e7-4655-8802-b92c4699bad8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.155670 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-config-data" (OuterVolumeSpecName: "config-data") pod "163922be-91e7-4655-8802-b92c4699bad8" (UID: "163922be-91e7-4655-8802-b92c4699bad8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.209274 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.209306 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.209315 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.209327 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.209334 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/163922be-91e7-4655-8802-b92c4699bad8-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.209342 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lflgf\" (UniqueName: \"kubernetes.io/projected/163922be-91e7-4655-8802-b92c4699bad8-kube-api-access-lflgf\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.267748 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bqj4x" event={"ID":"8483f386-39e7-45eb-bd05-02dcdce7677f","Type":"ContainerDied","Data":"13f9e9db5295b0c52bc8fb5421ad597996224693802281fc3774bdfeee202438"} Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.267814 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f9e9db5295b0c52bc8fb5421ad597996224693802281fc3774bdfeee202438" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.267905 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bqj4x" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.291200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" event={"ID":"38be6432-e14f-401f-bb52-f0266cffb420","Type":"ContainerDied","Data":"be853e3043cbc74e06d9a1ac5ea62d58a17127bcda3369ad4a97f3960b87af9c"} Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.291258 4687 scope.go:117] "RemoveContainer" containerID="875ab923c2128f241cde63c294be12dd7345300a7c2ecd6959292bbe51c0a207" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.291455 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lp6js" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.321249 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerStarted","Data":"f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb"} Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.337466 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-684c74d595-mgzvt"] Mar 12 16:25:26 crc kubenswrapper[4687]: E0312 16:25:26.338146 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38be6432-e14f-401f-bb52-f0266cffb420" containerName="dnsmasq-dns" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.338265 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be6432-e14f-401f-bb52-f0266cffb420" containerName="dnsmasq-dns" Mar 12 16:25:26 crc kubenswrapper[4687]: E0312 16:25:26.338329 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8483f386-39e7-45eb-bd05-02dcdce7677f" containerName="barbican-db-sync" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.338408 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8483f386-39e7-45eb-bd05-02dcdce7677f" containerName="barbican-db-sync" Mar 12 16:25:26 crc kubenswrapper[4687]: E0312 16:25:26.338488 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38be6432-e14f-401f-bb52-f0266cffb420" containerName="init" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.338556 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be6432-e14f-401f-bb52-f0266cffb420" containerName="init" Mar 12 16:25:26 crc kubenswrapper[4687]: E0312 16:25:26.338624 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5649bea8-d269-4e13-bcf9-8e58f4cdb132" containerName="placement-db-sync" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.338676 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5649bea8-d269-4e13-bcf9-8e58f4cdb132" containerName="placement-db-sync" Mar 12 16:25:26 crc kubenswrapper[4687]: E0312 16:25:26.338744 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163922be-91e7-4655-8802-b92c4699bad8" containerName="keystone-bootstrap" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.338797 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="163922be-91e7-4655-8802-b92c4699bad8" containerName="keystone-bootstrap" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.339024 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5649bea8-d269-4e13-bcf9-8e58f4cdb132" containerName="placement-db-sync" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.339094 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8483f386-39e7-45eb-bd05-02dcdce7677f" containerName="barbican-db-sync" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.339164 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="163922be-91e7-4655-8802-b92c4699bad8" containerName="keystone-bootstrap" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.339226 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="38be6432-e14f-401f-bb52-f0266cffb420" containerName="dnsmasq-dns" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.340012 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.344170 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-684c74d595-mgzvt"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.347856 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.348037 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.387936 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5ptv6" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.388148 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v655j" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.388743 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v655j" event={"ID":"163922be-91e7-4655-8802-b92c4699bad8","Type":"ContainerDied","Data":"0b29b1d15fc32e396510e471c22fbc8da9b3a5e02b822cba9e0547bcf4511c21"} Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.388774 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b29b1d15fc32e396510e471c22fbc8da9b3a5e02b822cba9e0547bcf4511c21" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.461554 4687 scope.go:117] "RemoveContainer" containerID="33e01f46614fba73f0d7544164e28ae7f9204961a15a70386e58bfe7ec6babca" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.468559 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d44dff674-gspmf"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.470769 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.480312 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.480504 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.480547 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jx2x2" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.480957 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.489725 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-fernet-keys\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514649 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmsf\" (UniqueName: \"kubernetes.io/projected/ca335c8b-6106-464d-ae4f-9efbed783816-kube-api-access-4zmsf\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-combined-ca-bundle\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514730 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-internal-tls-certs\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514799 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-config-data\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-scripts\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514894 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-credential-keys\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.514995 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-public-tls-certs\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.534729 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d44dff674-gspmf"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.579136 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lp6js"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.609182 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lp6js"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.620801 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-internal-tls-certs\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.620844 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-config-data\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.620943 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-scripts\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.620969 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-config-data\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.621015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4nwq\" (UniqueName: \"kubernetes.io/projected/66165239-6c53-469e-97c0-fbcec87f22d3-kube-api-access-h4nwq\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.621066 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-scripts\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.632695 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-credential-keys\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.632835 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-internal-tls-certs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.632870 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-public-tls-certs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.632909 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-combined-ca-bundle\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.632981 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-public-tls-certs\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.633013 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66165239-6c53-469e-97c0-fbcec87f22d3-logs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.633103 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-fernet-keys\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.633121 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmsf\" (UniqueName: \"kubernetes.io/projected/ca335c8b-6106-464d-ae4f-9efbed783816-kube-api-access-4zmsf\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.633251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-combined-ca-bundle\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.650334 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-worker-7cf7bfd84c-qqlwf"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.659732 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.664088 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.664281 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hsmbz" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.664352 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-config-data\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.664546 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.665122 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-internal-tls-certs\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.666005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-fernet-keys\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.666479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-combined-ca-bundle\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.668631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-public-tls-certs\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.668921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-scripts\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.671222 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmsf\" (UniqueName: \"kubernetes.io/projected/ca335c8b-6106-464d-ae4f-9efbed783816-kube-api-access-4zmsf\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.677599 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ca335c8b-6106-464d-ae4f-9efbed783816-credential-keys\") pod \"keystone-684c74d595-mgzvt\" (UID: \"ca335c8b-6106-464d-ae4f-9efbed783816\") " pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.723134 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gs2gh"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.725321 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.740563 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66165239-6c53-469e-97c0-fbcec87f22d3-logs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.740669 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-config-data\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.740729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-scripts\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.740775 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nwq\" (UniqueName: \"kubernetes.io/projected/66165239-6c53-469e-97c0-fbcec87f22d3-kube-api-access-h4nwq\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.740866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-internal-tls-certs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.740892 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-public-tls-certs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.740919 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-combined-ca-bundle\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.741594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66165239-6c53-469e-97c0-fbcec87f22d3-logs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " 
pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.747650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-internal-tls-certs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.751582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-config-data\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.751955 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-combined-ca-bundle\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.760315 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-scripts\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.760925 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-public-tls-certs\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.770658 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4nwq\" (UniqueName: \"kubernetes.io/projected/66165239-6c53-469e-97c0-fbcec87f22d3-kube-api-access-h4nwq\") pod \"placement-7d44dff674-gspmf\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.772006 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57d99b55cd-9f9vz"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.773781 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.778191 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.791233 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57d99b55cd-9f9vz"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.793786 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.819768 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7cf7bfd84c-qqlwf"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.836555 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.842782 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lqmz\" (UniqueName: \"kubernetes.io/projected/920e5784-4b51-47ac-838d-4d216d80f128-kube-api-access-9lqmz\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.842834 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.842935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgv8\" (UniqueName: \"kubernetes.io/projected/1fb6c2ae-414d-4a45-81f0-4505469a7143-kube-api-access-fxgv8\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843029 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-config-data\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-config\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843156 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-svc\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-config-data-custom\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843222 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-combined-ca-bundle\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.843315 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fb6c2ae-414d-4a45-81f0-4505469a7143-logs\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.845032 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gs2gh"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945691 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-logs\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945745 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-svc\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945771 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-config-data-custom\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945802 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-config-data-custom\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945826 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-combined-ca-bundle\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945852 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: 
\"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945869 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fb6c2ae-414d-4a45-81f0-4505469a7143-logs\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945891 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-combined-ca-bundle\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lqmz\" (UniqueName: \"kubernetes.io/projected/920e5784-4b51-47ac-838d-4d216d80f128-kube-api-access-9lqmz\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.945966 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgv8\" (UniqueName: \"kubernetes.io/projected/1fb6c2ae-414d-4a45-81f0-4505469a7143-kube-api-access-fxgv8\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.946041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-config-data\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.946057 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-config\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.946084 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsh8q\" (UniqueName: \"kubernetes.io/projected/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-kube-api-access-lsh8q\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.946114 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.946136 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-config-data\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.947075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-svc\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.948733 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.949775 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-config\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.954454 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.955018 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.955493 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fb6c2ae-414d-4a45-81f0-4505469a7143-logs\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.956194 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-config-data-custom\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.958475 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-config-data\") pod 
\"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.960865 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb6c2ae-414d-4a45-81f0-4505469a7143-combined-ca-bundle\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.969058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgv8\" (UniqueName: \"kubernetes.io/projected/1fb6c2ae-414d-4a45-81f0-4505469a7143-kube-api-access-fxgv8\") pod \"barbican-worker-7cf7bfd84c-qqlwf\" (UID: \"1fb6c2ae-414d-4a45-81f0-4505469a7143\") " pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.983188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lqmz\" (UniqueName: \"kubernetes.io/projected/920e5784-4b51-47ac-838d-4d216d80f128-kube-api-access-9lqmz\") pod \"dnsmasq-dns-85ff748b95-gs2gh\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.990123 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bcccf94d8-svjfj"] Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.992261 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.997707 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 16:25:26 crc kubenswrapper[4687]: I0312 16:25:26.997821 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.001463 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.001598 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bcccf94d8-svjfj"] Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.012491 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.019557 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.045856 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.045937 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.061657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-combined-ca-bundle\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.067314 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.067984 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsh8q\" (UniqueName: \"kubernetes.io/projected/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-kube-api-access-lsh8q\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.068148 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-config-data\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.069153 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-logs\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.069261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-config-data-custom\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.070744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-logs\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.075978 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-config-data\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 
16:25:27.079346 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-combined-ca-bundle\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.081248 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-84c558f4db-5rcnd"] Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.093982 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.105921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsh8q\" (UniqueName: \"kubernetes.io/projected/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-kube-api-access-lsh8q\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.114999 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c-config-data-custom\") pod \"barbican-keystone-listener-57d99b55cd-9f9vz\" (UID: \"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c\") " pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.155002 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84c558f4db-5rcnd"] Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.157219 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.172753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59c8500-2bca-4d5a-b644-354f3b3b453e-logs\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.181894 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data-custom\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.182143 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6fl\" (UniqueName: \"kubernetes.io/projected/b59c8500-2bca-4d5a-b644-354f3b3b453e-kube-api-access-ls6fl\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.182498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-combined-ca-bundle\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.182757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285075 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285129 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-config-data\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59c8500-2bca-4d5a-b644-354f3b3b453e-logs\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285178 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data-custom\") pod 
\"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285243 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njltk\" (UniqueName: \"kubernetes.io/projected/8a158c0a-38a3-4fd1-b759-302a9c695434-kube-api-access-njltk\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285274 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a158c0a-38a3-4fd1-b759-302a9c695434-logs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-internal-tls-certs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls6fl\" (UniqueName: \"kubernetes.io/projected/b59c8500-2bca-4d5a-b644-354f3b3b453e-kube-api-access-ls6fl\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-combined-ca-bundle\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-public-tls-certs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285523 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-combined-ca-bundle\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.285546 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-scripts\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.295812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b59c8500-2bca-4d5a-b644-354f3b3b453e-logs\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.298195 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.301531 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-combined-ca-bundle\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.308079 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data-custom\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.344481 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls6fl\" (UniqueName: \"kubernetes.io/projected/b59c8500-2bca-4d5a-b644-354f3b3b453e-kube-api-access-ls6fl\") pod \"barbican-api-7bcccf94d8-svjfj\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.387670 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-config-data\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.391789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njltk\" (UniqueName: \"kubernetes.io/projected/8a158c0a-38a3-4fd1-b759-302a9c695434-kube-api-access-njltk\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.391818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a158c0a-38a3-4fd1-b759-302a9c695434-logs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.391846 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-internal-tls-certs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.391901 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-combined-ca-bundle\") pod 
\"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.391941 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-public-tls-certs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.392006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-scripts\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.393129 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a158c0a-38a3-4fd1-b759-302a9c695434-logs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.405479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-combined-ca-bundle\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.408849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-internal-tls-certs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.409214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-scripts\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.410033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-config-data\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.421239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njltk\" (UniqueName: \"kubernetes.io/projected/8a158c0a-38a3-4fd1-b759-302a9c695434-kube-api-access-njltk\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.436418 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zd9d7" event={"ID":"9846a398-b104-418e-ace9-6eb022ddacbb","Type":"ContainerStarted","Data":"2613474324dd4b9c6c1b6f5102decd6086eff68bcade87f29fbc54684b948e08"} Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.439004 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a158c0a-38a3-4fd1-b759-302a9c695434-public-tls-certs\") pod \"placement-84c558f4db-5rcnd\" (UID: \"8a158c0a-38a3-4fd1-b759-302a9c695434\") " pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.475848 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-zd9d7" podStartSLOduration=2.999182066 podStartE2EDuration="44.475828313s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="2026-03-12 16:24:44.721076633 +0000 UTC m=+1333.685038977" lastFinishedPulling="2026-03-12 16:25:26.19772288 +0000 UTC m=+1375.161685224" observedRunningTime="2026-03-12 16:25:27.474889317 +0000 UTC m=+1376.438851661" watchObservedRunningTime="2026-03-12 16:25:27.475828313 +0000 UTC m=+1376.439790667" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.494318 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.710369 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.800287 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38be6432-e14f-401f-bb52-f0266cffb420" path="/var/lib/kubelet/pods/38be6432-e14f-401f-bb52-f0266cffb420/volumes" Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.890175 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d44dff674-gspmf"] Mar 12 16:25:27 crc kubenswrapper[4687]: I0312 16:25:27.910214 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-684c74d595-mgzvt"] Mar 12 16:25:28 crc kubenswrapper[4687]: I0312 16:25:28.365705 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gs2gh"] Mar 12 16:25:28 crc kubenswrapper[4687]: I0312 16:25:28.437441 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7cf7bfd84c-qqlwf"] Mar 12 16:25:28 crc kubenswrapper[4687]: I0312 16:25:28.513930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-684c74d595-mgzvt" event={"ID":"ca335c8b-6106-464d-ae4f-9efbed783816","Type":"ContainerStarted","Data":"bf7e0194ab43b747d49b1df09c192ec6e1f590921b2f6c99105ce0775948f930"} Mar 12 16:25:28 crc kubenswrapper[4687]: I0312 16:25:28.544072 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57d99b55cd-9f9vz"] Mar 12 16:25:28 crc kubenswrapper[4687]: I0312 16:25:28.552752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d44dff674-gspmf" event={"ID":"66165239-6c53-469e-97c0-fbcec87f22d3","Type":"ContainerStarted","Data":"813194da63759f0f38b10ff211392d310905ab3b48229dd0e57c77676846f0df"} Mar 12 16:25:28 crc kubenswrapper[4687]: I0312 16:25:28.721125 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bcccf94d8-svjfj"] Mar 12 16:25:28 crc kubenswrapper[4687]: W0312 16:25:28.751990 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59c8500_2bca_4d5a_b644_354f3b3b453e.slice/crio-312e03fb9b1f5f857fece5cb28a803c2f6e8a0ba3c55e3085b433c52049050a3 WatchSource:0}: Error finding container 312e03fb9b1f5f857fece5cb28a803c2f6e8a0ba3c55e3085b433c52049050a3: Status 404 returned error can't 
find the container with id 312e03fb9b1f5f857fece5cb28a803c2f6e8a0ba3c55e3085b433c52049050a3 Mar 12 16:25:28 crc kubenswrapper[4687]: I0312 16:25:28.945307 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84c558f4db-5rcnd"] Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.581031 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" event={"ID":"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c","Type":"ContainerStarted","Data":"1bc08b06dda6760b80d74f02e93eb17c658aab13c852ee81e0a1f87343c38c86"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.592473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" event={"ID":"1fb6c2ae-414d-4a45-81f0-4505469a7143","Type":"ContainerStarted","Data":"5655032d0a705bb0243918e666ec0a3f4f49bad9e086491a4a32e40752a5abe2"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.604192 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84c558f4db-5rcnd" event={"ID":"8a158c0a-38a3-4fd1-b759-302a9c695434","Type":"ContainerStarted","Data":"f8a04f976d0ec92249b641851458e2fda98c81721d4717a8e49259fdd594feda"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.616403 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcccf94d8-svjfj" event={"ID":"b59c8500-2bca-4d5a-b644-354f3b3b453e","Type":"ContainerStarted","Data":"5fd48ba724bb0d5cf43f0a408b6c888cddf727ea9c4306d966b43d3bde0fceb8"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.616455 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcccf94d8-svjfj" event={"ID":"b59c8500-2bca-4d5a-b644-354f3b3b453e","Type":"ContainerStarted","Data":"312e03fb9b1f5f857fece5cb28a803c2f6e8a0ba3c55e3085b433c52049050a3"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.622333 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-684c74d595-mgzvt" event={"ID":"ca335c8b-6106-464d-ae4f-9efbed783816","Type":"ContainerStarted","Data":"2a10296dfe5fa312acd1481ed2b3767e97b4034904dd3aa28658d60cb1d1fa65"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.622594 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.628600 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d44dff674-gspmf" event={"ID":"66165239-6c53-469e-97c0-fbcec87f22d3","Type":"ContainerStarted","Data":"bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.642894 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9mcqx" event={"ID":"de14ef1d-d5a7-490a-a522-8d4cf39989b4","Type":"ContainerStarted","Data":"085403b99998b377fc055c062b9016db054f24d6ce2a5cb5bf108a44a2901ffc"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.658249 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-684c74d595-mgzvt" podStartSLOduration=3.6582265400000002 podStartE2EDuration="3.65822654s" podCreationTimestamp="2026-03-12 16:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:29.641084761 +0000 UTC m=+1378.605047105" watchObservedRunningTime="2026-03-12 16:25:29.65822654 +0000 UTC m=+1378.622188884" Mar 12 16:25:29 crc 
kubenswrapper[4687]: I0312 16:25:29.669831 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9mcqx" podStartSLOduration=4.370375868 podStartE2EDuration="46.669810815s" podCreationTimestamp="2026-03-12 16:24:43 +0000 UTC" firstStartedPulling="2026-03-12 16:24:45.175261225 +0000 UTC m=+1334.139223569" lastFinishedPulling="2026-03-12 16:25:27.474696172 +0000 UTC m=+1376.438658516" observedRunningTime="2026-03-12 16:25:29.665900158 +0000 UTC m=+1378.629862512" watchObservedRunningTime="2026-03-12 16:25:29.669810815 +0000 UTC m=+1378.633773149" Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.702905 4687 generic.go:334] "Generic (PLEG): container finished" podID="920e5784-4b51-47ac-838d-4d216d80f128" containerID="ed33309c39a0eab3f783c65994e3839a239e5930a896612576502559af4a8275" exitCode=0 Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.702980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" event={"ID":"920e5784-4b51-47ac-838d-4d216d80f128","Type":"ContainerDied","Data":"ed33309c39a0eab3f783c65994e3839a239e5930a896612576502559af4a8275"} Mar 12 16:25:29 crc kubenswrapper[4687]: I0312 16:25:29.703013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" event={"ID":"920e5784-4b51-47ac-838d-4d216d80f128","Type":"ContainerStarted","Data":"84cf75fb5a90e17541ab86af27f84f4fcec1dcf204ab81915cb2f344b767ff48"} Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.599548 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6844d7659d-l5x7z"] Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.602429 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.604742 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.605486 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.640471 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6844d7659d-l5x7z"] Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.727950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-config-data-custom\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.728014 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-combined-ca-bundle\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.728056 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215e5a6b-7a05-4522-8f4c-7e0f82634490-logs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" 
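
The two pod_startup_latency_tracker entries above are consistent with podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp and podStartSLOduration = E2E duration minus the image-pull window (lastFinishedPulling - firstStartedPulling). Below is a minimal Python sketch re-checking that arithmetic with the timestamps quoted in the log; the relation itself is an inference from these values, not something the kubelet states explicitly.

from decimal import Decimal as D

def seconds(ts):
    # "16:25:27.475828313" -> exact seconds since midnight as a Decimal
    h, m, s = ts.split(":")
    return D(h) * 3600 + D(m) * 60 + D(s)

def check(pod, created, first_pull, last_pull, observed):
    e2e = seconds(observed) - seconds(created)        # podStartE2EDuration
    pull = seconds(last_pull) - seconds(first_pull)   # image-pull window
    print(f"{pod}: e2e={e2e}s slo={e2e - pull}s")

# Timestamps copied from the two entries above (same calendar day, so
# clock-time subtraction is sufficient).
check("heat-db-sync-zd9d7",   "16:24:43", "16:24:44.721076633",
      "16:25:26.19772288",  "16:25:27.475828313")
check("cinder-db-sync-9mcqx", "16:24:43", "16:24:45.175261225",
      "16:25:27.474696172", "16:25:29.669810815")
# Expected output:
#   heat-db-sync-zd9d7: e2e=44.475828313s slo=2.999182066s
#   cinder-db-sync-9mcqx: e2e=46.669810815s slo=4.370375868s

Both pods reproduce the logged values exactly, which explains why these db-sync pods report only a few seconds of SLO duration despite wall-clock startups of roughly 44-47 seconds dominated by image pulls.
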
Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.728080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-config-data\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.728144 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-internal-tls-certs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.728196 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwb6q\" (UniqueName: \"kubernetes.io/projected/215e5a6b-7a05-4522-8f4c-7e0f82634490-kube-api-access-dwb6q\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.728224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-public-tls-certs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.745487 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" event={"ID":"920e5784-4b51-47ac-838d-4d216d80f128","Type":"ContainerStarted","Data":"38ef10712bcec9d8a116e141b4469fc684084441857c56a6e295345167f3f09f"} Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.745599 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.754895 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84c558f4db-5rcnd" event={"ID":"8a158c0a-38a3-4fd1-b759-302a9c695434","Type":"ContainerStarted","Data":"b45d2f45b7b235b5184ba49247d8565cfdb43a21a53f956af4b53b510bb4a048"} Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.754937 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84c558f4db-5rcnd" event={"ID":"8a158c0a-38a3-4fd1-b759-302a9c695434","Type":"ContainerStarted","Data":"631facd5b332e24d74d6cf7275d092c0970dd7c2b588395999bc889937bc3fd9"} Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.756068 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.765529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcccf94d8-svjfj" event={"ID":"b59c8500-2bca-4d5a-b644-354f3b3b453e","Type":"ContainerStarted","Data":"bc0025b4605a3b64d76332cd5baac16b1e4833b5bc6d7a9c0e77b0fa085d32c4"} Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.766491 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.766521 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.778151 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d44dff674-gspmf" event={"ID":"66165239-6c53-469e-97c0-fbcec87f22d3","Type":"ContainerStarted","Data":"db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210"} Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.778198 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.778222 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.785611 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" podStartSLOduration=4.78559406 podStartE2EDuration="4.78559406s" podCreationTimestamp="2026-03-12 16:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:30.775786212 +0000 UTC m=+1379.739748556" watchObservedRunningTime="2026-03-12 16:25:30.78559406 +0000 UTC m=+1379.749556394" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.803567 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bcccf94d8-svjfj" podStartSLOduration=4.80354726 podStartE2EDuration="4.80354726s" podCreationTimestamp="2026-03-12 16:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:30.797823284 +0000 UTC m=+1379.761785618" watchObservedRunningTime="2026-03-12 16:25:30.80354726 +0000 UTC m=+1379.767509604" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.825180 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-84c558f4db-5rcnd" podStartSLOduration=4.825157549 podStartE2EDuration="4.825157549s" podCreationTimestamp="2026-03-12 16:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:30.817199881 +0000 UTC m=+1379.781162235" watchObservedRunningTime="2026-03-12 16:25:30.825157549 +0000 UTC m=+1379.789119893" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.834358 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-internal-tls-certs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.834643 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwb6q\" (UniqueName: \"kubernetes.io/projected/215e5a6b-7a05-4522-8f4c-7e0f82634490-kube-api-access-dwb6q\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.834687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-public-tls-certs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: 
\"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.834916 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-config-data-custom\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.834976 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-combined-ca-bundle\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.835009 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215e5a6b-7a05-4522-8f4c-7e0f82634490-logs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.835035 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-config-data\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.838019 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215e5a6b-7a05-4522-8f4c-7e0f82634490-logs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.843408 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-combined-ca-bundle\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.846491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-config-data\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.846775 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-public-tls-certs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.846858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-internal-tls-certs\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc 
kubenswrapper[4687]: I0312 16:25:30.858241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215e5a6b-7a05-4522-8f4c-7e0f82634490-config-data-custom\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.868758 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d44dff674-gspmf" podStartSLOduration=4.868737868 podStartE2EDuration="4.868737868s" podCreationTimestamp="2026-03-12 16:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:30.858205521 +0000 UTC m=+1379.822167865" watchObservedRunningTime="2026-03-12 16:25:30.868737868 +0000 UTC m=+1379.832700212" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.870442 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwb6q\" (UniqueName: \"kubernetes.io/projected/215e5a6b-7a05-4522-8f4c-7e0f82634490-kube-api-access-dwb6q\") pod \"barbican-api-6844d7659d-l5x7z\" (UID: \"215e5a6b-7a05-4522-8f4c-7e0f82634490\") " pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:30 crc kubenswrapper[4687]: I0312 16:25:30.945478 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:31 crc kubenswrapper[4687]: I0312 16:25:31.841942 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.006924 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6844d7659d-l5x7z"] Mar 12 16:25:33 crc kubenswrapper[4687]: W0312 16:25:33.030807 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215e5a6b_7a05_4522_8f4c_7e0f82634490.slice/crio-1df560f4834e085d4e43589ba6b4310494260389310757126776a1977d9230e2 WatchSource:0}: Error finding container 1df560f4834e085d4e43589ba6b4310494260389310757126776a1977d9230e2: Status 404 returned error can't find the container with id 1df560f4834e085d4e43589ba6b4310494260389310757126776a1977d9230e2 Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.735941 4687 scope.go:117] "RemoveContainer" containerID="d4e5e55eb7e36419ede0a69c0e52a8adf492ae13c125a8dafd93347d9e732b54" Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.878124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6844d7659d-l5x7z" event={"ID":"215e5a6b-7a05-4522-8f4c-7e0f82634490","Type":"ContainerStarted","Data":"162124ad91091ccef23fcd01d45b307bd998b1cbcddf7c85db09f7af04c83a91"} Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.878161 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6844d7659d-l5x7z" event={"ID":"215e5a6b-7a05-4522-8f4c-7e0f82634490","Type":"ContainerStarted","Data":"3ea60832f12073f1a8cd75e6cdfaa6482ca2cb3bb31876f6c45461038270404a"} Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.878171 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6844d7659d-l5x7z" event={"ID":"215e5a6b-7a05-4522-8f4c-7e0f82634490","Type":"ContainerStarted","Data":"1df560f4834e085d4e43589ba6b4310494260389310757126776a1977d9230e2"} Mar 12 16:25:33 crc 
kubenswrapper[4687]: I0312 16:25:33.878935 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.878997 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.888450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" event={"ID":"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c","Type":"ContainerStarted","Data":"4610b81c903156798fc0f32f2a62f849b30a62fbbad7c66f8de5e6116e5d23a4"} Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.888968 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" event={"ID":"037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c","Type":"ContainerStarted","Data":"cf8cb8a2999efc0a7e96df70f3df0d5cb94b0d9521e34753a659bc05dea4996b"} Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.900225 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6844d7659d-l5x7z" podStartSLOduration=3.900205351 podStartE2EDuration="3.900205351s" podCreationTimestamp="2026-03-12 16:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:33.898854424 +0000 UTC m=+1382.862816778" watchObservedRunningTime="2026-03-12 16:25:33.900205351 +0000 UTC m=+1382.864167715" Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.910178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" event={"ID":"1fb6c2ae-414d-4a45-81f0-4505469a7143","Type":"ContainerStarted","Data":"7424d33db22fc80954d21778343e4a3f2f2808cc486bb3321fb7576c64030db8"} Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.910524 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" event={"ID":"1fb6c2ae-414d-4a45-81f0-4505469a7143","Type":"ContainerStarted","Data":"47aa67e277b4c0b5f29a89e9250582a62eacf2194f6bbcf0a27914986b52cee1"} Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.922933 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57d99b55cd-9f9vz" podStartSLOduration=4.088525219 podStartE2EDuration="7.922915171s" podCreationTimestamp="2026-03-12 16:25:26 +0000 UTC" firstStartedPulling="2026-03-12 16:25:28.631626098 +0000 UTC m=+1377.595588442" lastFinishedPulling="2026-03-12 16:25:32.46601605 +0000 UTC m=+1381.429978394" observedRunningTime="2026-03-12 16:25:33.9192187 +0000 UTC m=+1382.883181044" watchObservedRunningTime="2026-03-12 16:25:33.922915171 +0000 UTC m=+1382.886877515" Mar 12 16:25:33 crc kubenswrapper[4687]: I0312 16:25:33.967016 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7cf7bfd84c-qqlwf" podStartSLOduration=4.0548021 podStartE2EDuration="7.966996854s" podCreationTimestamp="2026-03-12 16:25:26 +0000 UTC" firstStartedPulling="2026-03-12 16:25:28.553418755 +0000 UTC m=+1377.517381099" lastFinishedPulling="2026-03-12 16:25:32.465613509 +0000 UTC m=+1381.429575853" observedRunningTime="2026-03-12 16:25:33.944952582 +0000 UTC m=+1382.908914956" watchObservedRunningTime="2026-03-12 16:25:33.966996854 +0000 UTC m=+1382.930959198" Mar 12 16:25:34 crc kubenswrapper[4687]: I0312 16:25:34.045051 4687 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57gjt" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" probeResult="failure" output=< Mar 12 16:25:34 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:25:34 crc kubenswrapper[4687]: > Mar 12 16:25:34 crc kubenswrapper[4687]: I0312 16:25:34.938267 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/2.log" Mar 12 16:25:34 crc kubenswrapper[4687]: I0312 16:25:34.940067 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/1.log" Mar 12 16:25:34 crc kubenswrapper[4687]: I0312 16:25:34.940459 4687 generic.go:334] "Generic (PLEG): container finished" podID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerID="d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d" exitCode=1 Mar 12 16:25:34 crc kubenswrapper[4687]: I0312 16:25:34.941574 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerDied","Data":"d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d"} Mar 12 16:25:34 crc kubenswrapper[4687]: I0312 16:25:34.941612 4687 scope.go:117] "RemoveContainer" containerID="d4e5e55eb7e36419ede0a69c0e52a8adf492ae13c125a8dafd93347d9e732b54" Mar 12 16:25:34 crc kubenswrapper[4687]: I0312 16:25:34.943524 4687 scope.go:117] "RemoveContainer" containerID="d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d" Mar 12 16:25:34 crc kubenswrapper[4687]: E0312 16:25:34.945271 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-784df9f744-9vs8d_openstack(3641fb4f-ebfa-4d32-a37c-ca304c44ccab)\"" pod="openstack/neutron-784df9f744-9vs8d" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.069527 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.164267 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c95nn"] Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.164522 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" podUID="c15f16c6-ca65-49db-931f-d62df73da465" containerName="dnsmasq-dns" containerID="cri-o://6b55d40ab435020e60158b8ab39b028d06aaa300834b855272a05efefc836b8e" gracePeriod=10 Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.892961 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" podUID="c15f16c6-ca65-49db-931f-d62df73da465" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.197:5353: connect: connection refused" Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.978820 4687 generic.go:334] "Generic (PLEG): container finished" podID="c15f16c6-ca65-49db-931f-d62df73da465" containerID="6b55d40ab435020e60158b8ab39b028d06aaa300834b855272a05efefc836b8e" exitCode=0 Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.978899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-c95nn" event={"ID":"c15f16c6-ca65-49db-931f-d62df73da465","Type":"ContainerDied","Data":"6b55d40ab435020e60158b8ab39b028d06aaa300834b855272a05efefc836b8e"} Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.983524 4687 generic.go:334] "Generic (PLEG): container finished" podID="9846a398-b104-418e-ace9-6eb022ddacbb" containerID="2613474324dd4b9c6c1b6f5102decd6086eff68bcade87f29fbc54684b948e08" exitCode=0 Mar 12 16:25:37 crc kubenswrapper[4687]: I0312 16:25:37.983557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zd9d7" event={"ID":"9846a398-b104-418e-ace9-6eb022ddacbb","Type":"ContainerDied","Data":"2613474324dd4b9c6c1b6f5102decd6086eff68bcade87f29fbc54684b948e08"} Mar 12 16:25:39 crc kubenswrapper[4687]: I0312 16:25:39.084662 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:39 crc kubenswrapper[4687]: I0312 16:25:39.112902 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:39 crc kubenswrapper[4687]: I0312 16:25:39.938870 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zd9d7" Mar 12 16:25:40 crc kubenswrapper[4687]: E0312 16:25:40.016187 4687 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.038152 4687 generic.go:334] "Generic (PLEG): container finished" podID="de14ef1d-d5a7-490a-a522-8d4cf39989b4" containerID="085403b99998b377fc055c062b9016db054f24d6ce2a5cb5bf108a44a2901ffc" exitCode=0 Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.038259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9mcqx" event={"ID":"de14ef1d-d5a7-490a-a522-8d4cf39989b4","Type":"ContainerDied","Data":"085403b99998b377fc055c062b9016db054f24d6ce2a5cb5bf108a44a2901ffc"} Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.049272 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zd9d7" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.049498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zd9d7" event={"ID":"9846a398-b104-418e-ace9-6eb022ddacbb","Type":"ContainerDied","Data":"a2ea8af0b437ccee7ebcbd58e37909626980ec5cf364241c9de4aa41010091c1"} Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.049523 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ea8af0b437ccee7ebcbd58e37909626980ec5cf364241c9de4aa41010091c1" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.074088 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-combined-ca-bundle\") pod \"9846a398-b104-418e-ace9-6eb022ddacbb\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.074346 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-config-data\") pod \"9846a398-b104-418e-ace9-6eb022ddacbb\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.074526 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdghq\" (UniqueName: \"kubernetes.io/projected/9846a398-b104-418e-ace9-6eb022ddacbb-kube-api-access-tdghq\") pod \"9846a398-b104-418e-ace9-6eb022ddacbb\" (UID: \"9846a398-b104-418e-ace9-6eb022ddacbb\") " Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.098079 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9846a398-b104-418e-ace9-6eb022ddacbb-kube-api-access-tdghq" (OuterVolumeSpecName: "kube-api-access-tdghq") pod "9846a398-b104-418e-ace9-6eb022ddacbb" (UID: "9846a398-b104-418e-ace9-6eb022ddacbb"). InnerVolumeSpecName "kube-api-access-tdghq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.181080 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdghq\" (UniqueName: \"kubernetes.io/projected/9846a398-b104-418e-ace9-6eb022ddacbb-kube-api-access-tdghq\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.200124 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9846a398-b104-418e-ace9-6eb022ddacbb" (UID: "9846a398-b104-418e-ace9-6eb022ddacbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.202857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-config-data" (OuterVolumeSpecName: "config-data") pod "9846a398-b104-418e-ace9-6eb022ddacbb" (UID: "9846a398-b104-418e-ace9-6eb022ddacbb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.283063 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:40 crc kubenswrapper[4687]: I0312 16:25:40.283104 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9846a398-b104-418e-ace9-6eb022ddacbb-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.088032 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" event={"ID":"c15f16c6-ca65-49db-931f-d62df73da465","Type":"ContainerDied","Data":"1c0184e0506ef8e6f744654f7fd39ace055e92512595d5648a915c524840939c"} Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.088319 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0184e0506ef8e6f744654f7fd39ace055e92512595d5648a915c524840939c" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.101374 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/2.log" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.157351 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.309681 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-swift-storage-0\") pod \"c15f16c6-ca65-49db-931f-d62df73da465\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.310007 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-sb\") pod \"c15f16c6-ca65-49db-931f-d62df73da465\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.310085 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-nb\") pod \"c15f16c6-ca65-49db-931f-d62df73da465\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.310105 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrng\" (UniqueName: \"kubernetes.io/projected/c15f16c6-ca65-49db-931f-d62df73da465-kube-api-access-mnrng\") pod \"c15f16c6-ca65-49db-931f-d62df73da465\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.310147 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-config\") pod \"c15f16c6-ca65-49db-931f-d62df73da465\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.310750 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-svc\") pod 
\"c15f16c6-ca65-49db-931f-d62df73da465\" (UID: \"c15f16c6-ca65-49db-931f-d62df73da465\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.349671 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15f16c6-ca65-49db-931f-d62df73da465-kube-api-access-mnrng" (OuterVolumeSpecName: "kube-api-access-mnrng") pod "c15f16c6-ca65-49db-931f-d62df73da465" (UID: "c15f16c6-ca65-49db-931f-d62df73da465"). InnerVolumeSpecName "kube-api-access-mnrng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.414114 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrng\" (UniqueName: \"kubernetes.io/projected/c15f16c6-ca65-49db-931f-d62df73da465-kube-api-access-mnrng\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: E0312 16:25:41.578046 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.615080 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.618515 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c15f16c6-ca65-49db-931f-d62df73da465" (UID: "c15f16c6-ca65-49db-931f-d62df73da465"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.622959 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.672721 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-config" (OuterVolumeSpecName: "config") pod "c15f16c6-ca65-49db-931f-d62df73da465" (UID: "c15f16c6-ca65-49db-931f-d62df73da465"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.673957 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c15f16c6-ca65-49db-931f-d62df73da465" (UID: "c15f16c6-ca65-49db-931f-d62df73da465"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.697521 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c15f16c6-ca65-49db-931f-d62df73da465" (UID: "c15f16c6-ca65-49db-931f-d62df73da465"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.698681 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c15f16c6-ca65-49db-931f-d62df73da465" (UID: "c15f16c6-ca65-49db-931f-d62df73da465"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.723982 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-config-data\") pod \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724090 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-db-sync-config-data\") pod \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724121 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-scripts\") pod \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724217 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de14ef1d-d5a7-490a-a522-8d4cf39989b4-etc-machine-id\") pod \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724244 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-combined-ca-bundle\") pod \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724283 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dznbs\" (UniqueName: \"kubernetes.io/projected/de14ef1d-d5a7-490a-a522-8d4cf39989b4-kube-api-access-dznbs\") pod \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\" (UID: \"de14ef1d-d5a7-490a-a522-8d4cf39989b4\") " Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724858 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724874 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724885 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.724894 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/c15f16c6-ca65-49db-931f-d62df73da465-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.725545 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de14ef1d-d5a7-490a-a522-8d4cf39989b4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "de14ef1d-d5a7-490a-a522-8d4cf39989b4" (UID: "de14ef1d-d5a7-490a-a522-8d4cf39989b4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.728553 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-scripts" (OuterVolumeSpecName: "scripts") pod "de14ef1d-d5a7-490a-a522-8d4cf39989b4" (UID: "de14ef1d-d5a7-490a-a522-8d4cf39989b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.728844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de14ef1d-d5a7-490a-a522-8d4cf39989b4-kube-api-access-dznbs" (OuterVolumeSpecName: "kube-api-access-dznbs") pod "de14ef1d-d5a7-490a-a522-8d4cf39989b4" (UID: "de14ef1d-d5a7-490a-a522-8d4cf39989b4"). InnerVolumeSpecName "kube-api-access-dznbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.728983 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "de14ef1d-d5a7-490a-a522-8d4cf39989b4" (UID: "de14ef1d-d5a7-490a-a522-8d4cf39989b4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.770119 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de14ef1d-d5a7-490a-a522-8d4cf39989b4" (UID: "de14ef1d-d5a7-490a-a522-8d4cf39989b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.800865 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-config-data" (OuterVolumeSpecName: "config-data") pod "de14ef1d-d5a7-490a-a522-8d4cf39989b4" (UID: "de14ef1d-d5a7-490a-a522-8d4cf39989b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.826853 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.826888 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.826899 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.826908 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de14ef1d-d5a7-490a-a522-8d4cf39989b4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.826916 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de14ef1d-d5a7-490a-a522-8d4cf39989b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:41 crc kubenswrapper[4687]: I0312 16:25:41.826924 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dznbs\" (UniqueName: \"kubernetes.io/projected/de14ef1d-d5a7-490a-a522-8d4cf39989b4-kube-api-access-dznbs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.118554 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9mcqx" event={"ID":"de14ef1d-d5a7-490a-a522-8d4cf39989b4","Type":"ContainerDied","Data":"03f76ac147ab1adec807c9c7f2752d3672c04ff238cfe4d0297a20b671fd68ca"} Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.118590 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f76ac147ab1adec807c9c7f2752d3672c04ff238cfe4d0297a20b671fd68ca" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.118730 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9mcqx" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.122005 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c95nn" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.123197 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="ceilometer-notification-agent" containerID="cri-o://f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1" gracePeriod=30 Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.123476 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerStarted","Data":"1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d"} Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.123827 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.123898 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="proxy-httpd" containerID="cri-o://1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d" gracePeriod=30 Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.123981 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="sg-core" containerID="cri-o://f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb" gracePeriod=30 Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.199448 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c95nn"] Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.208802 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c95nn"] Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.567128 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:42 crc kubenswrapper[4687]: E0312 16:25:42.569069 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15f16c6-ca65-49db-931f-d62df73da465" containerName="dnsmasq-dns" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.569191 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15f16c6-ca65-49db-931f-d62df73da465" containerName="dnsmasq-dns" Mar 12 16:25:42 crc kubenswrapper[4687]: E0312 16:25:42.569278 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9846a398-b104-418e-ace9-6eb022ddacbb" containerName="heat-db-sync" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.569350 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9846a398-b104-418e-ace9-6eb022ddacbb" containerName="heat-db-sync" Mar 12 16:25:42 crc kubenswrapper[4687]: E0312 16:25:42.569432 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de14ef1d-d5a7-490a-a522-8d4cf39989b4" containerName="cinder-db-sync" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.569492 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="de14ef1d-d5a7-490a-a522-8d4cf39989b4" containerName="cinder-db-sync" Mar 12 16:25:42 crc kubenswrapper[4687]: E0312 16:25:42.569581 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15f16c6-ca65-49db-931f-d62df73da465" containerName="init" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.569636 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c15f16c6-ca65-49db-931f-d62df73da465" containerName="init" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.569921 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="de14ef1d-d5a7-490a-a522-8d4cf39989b4" containerName="cinder-db-sync" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.569991 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15f16c6-ca65-49db-931f-d62df73da465" containerName="dnsmasq-dns" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.570065 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9846a398-b104-418e-ace9-6eb022ddacbb" containerName="heat-db-sync" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.571726 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.577640 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-868pz" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.577975 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.578120 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.581946 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.628158 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.653402 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9qv\" (UniqueName: \"kubernetes.io/projected/189425db-4e6c-4e63-94a5-0d32d6efea54-kube-api-access-rk9qv\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.653495 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-scripts\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.653551 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.653601 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.653665 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.653685 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/189425db-4e6c-4e63-94a5-0d32d6efea54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.673680 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2msn7"] Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.679094 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.717423 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2msn7"] Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756144 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/189425db-4e6c-4e63-94a5-0d32d6efea54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756213 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk9qv\" (UniqueName: \"kubernetes.io/projected/189425db-4e6c-4e63-94a5-0d32d6efea54-kube-api-access-rk9qv\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756238 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz59w\" (UniqueName: \"kubernetes.io/projected/05c97834-0ab6-445e-9dd7-61e01484b052-kube-api-access-qz59w\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-scripts\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756337 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/189425db-4e6c-4e63-94a5-0d32d6efea54-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756383 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756953 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.756998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-config\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.757065 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.757086 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.766154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-scripts\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.766268 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.782884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 
16:25:42.795072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk9qv\" (UniqueName: \"kubernetes.io/projected/189425db-4e6c-4e63-94a5-0d32d6efea54-kube-api-access-rk9qv\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.802992 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.851601 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.853507 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.856696 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.860535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.860618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz59w\" (UniqueName: \"kubernetes.io/projected/05c97834-0ab6-445e-9dd7-61e01484b052-kube-api-access-qz59w\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.860709 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.860736 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-config\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.860770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.860786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 
16:25:42.862262 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.862862 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-config\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.865269 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.866181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.866617 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.866709 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.895675 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.904141 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz59w\" (UniqueName: \"kubernetes.io/projected/05c97834-0ab6-445e-9dd7-61e01484b052-kube-api-access-qz59w\") pod \"dnsmasq-dns-5c9776ccc5-2msn7\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.963229 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb71c491-fc79-46b2-946c-eaa131c7d104-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.963394 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.963413 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgg85\" (UniqueName: \"kubernetes.io/projected/fb71c491-fc79-46b2-946c-eaa131c7d104-kube-api-access-dgg85\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.963428 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.963504 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.963524 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb71c491-fc79-46b2-946c-eaa131c7d104-logs\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:42 crc kubenswrapper[4687]: I0312 16:25:42.963550 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-scripts\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.020930 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.066569 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb71c491-fc79-46b2-946c-eaa131c7d104-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.067016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.067038 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgg85\" (UniqueName: \"kubernetes.io/projected/fb71c491-fc79-46b2-946c-eaa131c7d104-kube-api-access-dgg85\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.067055 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.067141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.067165 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb71c491-fc79-46b2-946c-eaa131c7d104-logs\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.067191 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-scripts\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.068583 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb71c491-fc79-46b2-946c-eaa131c7d104-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.073452 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.073682 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb71c491-fc79-46b2-946c-eaa131c7d104-logs\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " 
pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.077932 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.078068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.085162 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-scripts\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.093847 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgg85\" (UniqueName: \"kubernetes.io/projected/fb71c491-fc79-46b2-946c-eaa131c7d104-kube-api-access-dgg85\") pod \"cinder-api-0\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.161945 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.162829 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.163586 4687 scope.go:117] "RemoveContainer" containerID="d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.168586 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-784df9f744-9vs8d" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.198:9696/\": dial tcp 10.217.0.198:9696: connect: connection refused" Mar 12 16:25:43 crc kubenswrapper[4687]: E0312 16:25:43.169218 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-784df9f744-9vs8d_openstack(3641fb4f-ebfa-4d32-a37c-ca304c44ccab)\"" pod="openstack/neutron-784df9f744-9vs8d" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.201639 4687 generic.go:334] "Generic (PLEG): container finished" podID="ef7e01d7-21f4-49be-8700-994e45280f37" containerID="1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d" exitCode=0 Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.202245 4687 generic.go:334] "Generic (PLEG): container finished" podID="ef7e01d7-21f4-49be-8700-994e45280f37" containerID="f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb" exitCode=2 Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.202169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerDied","Data":"1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d"} Mar 12 16:25:43 
crc kubenswrapper[4687]: I0312 16:25:43.202409 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerDied","Data":"f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb"} Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.279748 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.319724 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.388879 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6844d7659d-l5x7z" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.479229 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bcccf94d8-svjfj"] Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.479474 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bcccf94d8-svjfj" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api-log" containerID="cri-o://5fd48ba724bb0d5cf43f0a408b6c888cddf727ea9c4306d966b43d3bde0fceb8" gracePeriod=30 Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.479913 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bcccf94d8-svjfj" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api" containerID="cri-o://bc0025b4605a3b64d76332cd5baac16b1e4833b5bc6d7a9c0e77b0fa085d32c4" gracePeriod=30 Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.508652 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bcccf94d8-svjfj" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.205:9311/healthcheck\": EOF" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.575061 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.763729 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15f16c6-ca65-49db-931f-d62df73da465" path="/var/lib/kubelet/pods/c15f16c6-ca65-49db-931f-d62df73da465/volumes" Mar 12 16:25:43 crc kubenswrapper[4687]: I0312 16:25:43.942316 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2msn7"] Mar 12 16:25:43 crc kubenswrapper[4687]: W0312 16:25:43.949489 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c97834_0ab6_445e_9dd7_61e01484b052.slice/crio-538b69710c0985528030f885fded691583b86ef757298a7a5a5277629dcaf1c6 WatchSource:0}: Error finding container 538b69710c0985528030f885fded691583b86ef757298a7a5a5277629dcaf1c6: Status 404 returned error can't find the container with id 538b69710c0985528030f885fded691583b86ef757298a7a5a5277629dcaf1c6 Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.045502 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.099171 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57gjt" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" probeResult="failure" 
output=< Mar 12 16:25:44 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:25:44 crc kubenswrapper[4687]: > Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.233413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"189425db-4e6c-4e63-94a5-0d32d6efea54","Type":"ContainerStarted","Data":"78d59d5e9fa78cdd2b3d28fdce69757c2565b49f498d3ce6cac5d6626d7095fc"} Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.234376 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" event={"ID":"05c97834-0ab6-445e-9dd7-61e01484b052","Type":"ContainerStarted","Data":"538b69710c0985528030f885fded691583b86ef757298a7a5a5277629dcaf1c6"} Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.237500 4687 generic.go:334] "Generic (PLEG): container finished" podID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerID="5fd48ba724bb0d5cf43f0a408b6c888cddf727ea9c4306d966b43d3bde0fceb8" exitCode=143 Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.237556 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcccf94d8-svjfj" event={"ID":"b59c8500-2bca-4d5a-b644-354f3b3b453e","Type":"ContainerDied","Data":"5fd48ba724bb0d5cf43f0a408b6c888cddf727ea9c4306d966b43d3bde0fceb8"} Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.243757 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb71c491-fc79-46b2-946c-eaa131c7d104","Type":"ContainerStarted","Data":"badd59a33e9f26226f1dc7bcae57f3559f5fe8155232b8615a36880ded55de48"} Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.244110 4687 scope.go:117] "RemoveContainer" containerID="d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d" Mar 12 16:25:44 crc kubenswrapper[4687]: E0312 16:25:44.244291 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-784df9f744-9vs8d_openstack(3641fb4f-ebfa-4d32-a37c-ca304c44ccab)\"" pod="openstack/neutron-784df9f744-9vs8d" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" Mar 12 16:25:44 crc kubenswrapper[4687]: I0312 16:25:44.665212 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:45 crc kubenswrapper[4687]: I0312 16:25:45.255802 4687 generic.go:334] "Generic (PLEG): container finished" podID="05c97834-0ab6-445e-9dd7-61e01484b052" containerID="9dbf1b3ccb798c3048e5095e91afcac8fbf2c71140dad299b009d951a2c43129" exitCode=0 Mar 12 16:25:45 crc kubenswrapper[4687]: I0312 16:25:45.255874 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" event={"ID":"05c97834-0ab6-445e-9dd7-61e01484b052","Type":"ContainerDied","Data":"9dbf1b3ccb798c3048e5095e91afcac8fbf2c71140dad299b009d951a2c43129"} Mar 12 16:25:45 crc kubenswrapper[4687]: I0312 16:25:45.538232 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:25:45 crc kubenswrapper[4687]: I0312 16:25:45.625428 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-784df9f744-9vs8d"] Mar 12 16:25:45 crc kubenswrapper[4687]: I0312 16:25:45.627811 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-784df9f744-9vs8d" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-api" 
containerID="cri-o://342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf" gracePeriod=30 Mar 12 16:25:45 crc kubenswrapper[4687]: I0312 16:25:45.993018 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-874865cfc-trxxb"] Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.067450 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.085635 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-874865cfc-trxxb"] Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.168799 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-ovndb-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.169341 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-internal-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.169545 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-combined-ca-bundle\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.169663 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-public-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.169760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-config\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.169876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-httpd-config\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.169998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gphhl\" (UniqueName: \"kubernetes.io/projected/b10bb84e-71f9-4b10-8a9e-24e05136a576-kube-api-access-gphhl\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.274560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-ovndb-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.274680 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-internal-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.274771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-combined-ca-bundle\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.274810 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-public-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.274837 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-config\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.274879 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-httpd-config\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.274931 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gphhl\" (UniqueName: \"kubernetes.io/projected/b10bb84e-71f9-4b10-8a9e-24e05136a576-kube-api-access-gphhl\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.280699 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-combined-ca-bundle\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.281533 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-ovndb-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.285372 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-internal-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: 
\"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.286043 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-httpd-config\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.290614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb71c491-fc79-46b2-946c-eaa131c7d104","Type":"ContainerStarted","Data":"1f59f71b44a246f3734454130537a05c5a68c47777891361ecae2e70f494ab0a"} Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.299422 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"189425db-4e6c-4e63-94a5-0d32d6efea54","Type":"ContainerStarted","Data":"5f2ac392d8c96544e58eb065c34577a5e0dc0299dd3ff6dd014fad8af03a04fe"} Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.300489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-public-tls-certs\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.301306 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gphhl\" (UniqueName: \"kubernetes.io/projected/b10bb84e-71f9-4b10-8a9e-24e05136a576-kube-api-access-gphhl\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.301645 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b10bb84e-71f9-4b10-8a9e-24e05136a576-config\") pod \"neutron-874865cfc-trxxb\" (UID: \"b10bb84e-71f9-4b10-8a9e-24e05136a576\") " pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.312731 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" event={"ID":"05c97834-0ab6-445e-9dd7-61e01484b052","Type":"ContainerStarted","Data":"1ec6a3b82a323a87dfab248865326c5a035e5f1581ada9e4164e13cf4c824a40"} Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.313109 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.337371 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" podStartSLOduration=4.337340597 podStartE2EDuration="4.337340597s" podCreationTimestamp="2026-03-12 16:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:46.332776603 +0000 UTC m=+1395.296738957" watchObservedRunningTime="2026-03-12 16:25:46.337340597 +0000 UTC m=+1395.301302941" Mar 12 16:25:46 crc kubenswrapper[4687]: I0312 16:25:46.407196 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.068859 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-874865cfc-trxxb"] Mar 12 16:25:47 crc kubenswrapper[4687]: W0312 16:25:47.084884 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb10bb84e_71f9_4b10_8a9e_24e05136a576.slice/crio-64e101261ad4b1ee9a4420cf17afc97e6a925a9b7c90ecd92bceeba062d05d9a WatchSource:0}: Error finding container 64e101261ad4b1ee9a4420cf17afc97e6a925a9b7c90ecd92bceeba062d05d9a: Status 404 returned error can't find the container with id 64e101261ad4b1ee9a4420cf17afc97e6a925a9b7c90ecd92bceeba062d05d9a Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.334277 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb71c491-fc79-46b2-946c-eaa131c7d104","Type":"ContainerStarted","Data":"eddfb1a02ff7f1dba246f0a87f5928991af2a3aadb1f8d130e7bd974d02ccdf1"} Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.334333 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api-log" containerID="cri-o://1f59f71b44a246f3734454130537a05c5a68c47777891361ecae2e70f494ab0a" gracePeriod=30 Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.334416 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api" containerID="cri-o://eddfb1a02ff7f1dba246f0a87f5928991af2a3aadb1f8d130e7bd974d02ccdf1" gracePeriod=30 Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.334673 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.339705 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-874865cfc-trxxb" event={"ID":"b10bb84e-71f9-4b10-8a9e-24e05136a576","Type":"ContainerStarted","Data":"64e101261ad4b1ee9a4420cf17afc97e6a925a9b7c90ecd92bceeba062d05d9a"} Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.346659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"189425db-4e6c-4e63-94a5-0d32d6efea54","Type":"ContainerStarted","Data":"6983076d1bcb86f0431b34ed74ad752a43dc7eb8564a402f49a70fa19845ed83"} Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.370430 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.370412195 podStartE2EDuration="5.370412195s" podCreationTimestamp="2026-03-12 16:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:47.366090187 +0000 UTC m=+1396.330052531" watchObservedRunningTime="2026-03-12 16:25:47.370412195 +0000 UTC m=+1396.334374539" Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.425150 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.5603165820000005 podStartE2EDuration="5.425126688s" podCreationTimestamp="2026-03-12 16:25:42 +0000 UTC" firstStartedPulling="2026-03-12 16:25:43.599745283 +0000 UTC m=+1392.563707627" lastFinishedPulling="2026-03-12 16:25:44.464555399 +0000 UTC m=+1393.428517733" 
observedRunningTime="2026-03-12 16:25:47.392688163 +0000 UTC m=+1396.356650507" watchObservedRunningTime="2026-03-12 16:25:47.425126688 +0000 UTC m=+1396.389089042" Mar 12 16:25:47 crc kubenswrapper[4687]: I0312 16:25:47.897418 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.035435 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.158105 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-run-httpd\") pod \"ef7e01d7-21f4-49be-8700-994e45280f37\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.158179 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-log-httpd\") pod \"ef7e01d7-21f4-49be-8700-994e45280f37\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.158246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-combined-ca-bundle\") pod \"ef7e01d7-21f4-49be-8700-994e45280f37\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.158348 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k7d6\" (UniqueName: \"kubernetes.io/projected/ef7e01d7-21f4-49be-8700-994e45280f37-kube-api-access-7k7d6\") pod \"ef7e01d7-21f4-49be-8700-994e45280f37\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.158468 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-sg-core-conf-yaml\") pod \"ef7e01d7-21f4-49be-8700-994e45280f37\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.158526 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-config-data\") pod \"ef7e01d7-21f4-49be-8700-994e45280f37\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.158549 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-scripts\") pod \"ef7e01d7-21f4-49be-8700-994e45280f37\" (UID: \"ef7e01d7-21f4-49be-8700-994e45280f37\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.159652 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef7e01d7-21f4-49be-8700-994e45280f37" (UID: "ef7e01d7-21f4-49be-8700-994e45280f37"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.181811 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef7e01d7-21f4-49be-8700-994e45280f37" (UID: "ef7e01d7-21f4-49be-8700-994e45280f37"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.184002 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7e01d7-21f4-49be-8700-994e45280f37-kube-api-access-7k7d6" (OuterVolumeSpecName: "kube-api-access-7k7d6") pod "ef7e01d7-21f4-49be-8700-994e45280f37" (UID: "ef7e01d7-21f4-49be-8700-994e45280f37"). InnerVolumeSpecName "kube-api-access-7k7d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.200499 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-scripts" (OuterVolumeSpecName: "scripts") pod "ef7e01d7-21f4-49be-8700-994e45280f37" (UID: "ef7e01d7-21f4-49be-8700-994e45280f37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.246318 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef7e01d7-21f4-49be-8700-994e45280f37" (UID: "ef7e01d7-21f4-49be-8700-994e45280f37"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.260787 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k7d6\" (UniqueName: \"kubernetes.io/projected/ef7e01d7-21f4-49be-8700-994e45280f37-kube-api-access-7k7d6\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.260816 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.260827 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.260834 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.260845 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef7e01d7-21f4-49be-8700-994e45280f37-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.333789 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-config-data" (OuterVolumeSpecName: "config-data") pod "ef7e01d7-21f4-49be-8700-994e45280f37" (UID: "ef7e01d7-21f4-49be-8700-994e45280f37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.337453 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef7e01d7-21f4-49be-8700-994e45280f37" (UID: "ef7e01d7-21f4-49be-8700-994e45280f37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.364072 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.364271 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef7e01d7-21f4-49be-8700-994e45280f37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.367051 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-874865cfc-trxxb" event={"ID":"b10bb84e-71f9-4b10-8a9e-24e05136a576","Type":"ContainerStarted","Data":"3d80ae687f9c5bb05b47b3cd42a143805f246b12dbf46975835e6d0d3cccab01"} Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.368144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-874865cfc-trxxb" event={"ID":"b10bb84e-71f9-4b10-8a9e-24e05136a576","Type":"ContainerStarted","Data":"6cfe1bad8c12cd07c4c730808c59766a6abbbd604ae92793e911adb11164a665"} Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.368388 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.378776 4687 generic.go:334] "Generic (PLEG): container finished" podID="ef7e01d7-21f4-49be-8700-994e45280f37" containerID="f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1" exitCode=0 Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.378867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerDied","Data":"f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1"} Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.378895 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef7e01d7-21f4-49be-8700-994e45280f37","Type":"ContainerDied","Data":"9f14638c6a5a91f190ad6e4aa329d0d79c77155f1c74c7d5fb48289df3966bbd"} Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.378913 4687 scope.go:117] "RemoveContainer" containerID="1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.379053 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.386011 4687 generic.go:334] "Generic (PLEG): container finished" podID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerID="eddfb1a02ff7f1dba246f0a87f5928991af2a3aadb1f8d130e7bd974d02ccdf1" exitCode=0 Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.386040 4687 generic.go:334] "Generic (PLEG): container finished" podID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerID="1f59f71b44a246f3734454130537a05c5a68c47777891361ecae2e70f494ab0a" exitCode=143 Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.386476 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb71c491-fc79-46b2-946c-eaa131c7d104","Type":"ContainerDied","Data":"eddfb1a02ff7f1dba246f0a87f5928991af2a3aadb1f8d130e7bd974d02ccdf1"} Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.386535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb71c491-fc79-46b2-946c-eaa131c7d104","Type":"ContainerDied","Data":"1f59f71b44a246f3734454130537a05c5a68c47777891361ecae2e70f494ab0a"} Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.386546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb71c491-fc79-46b2-946c-eaa131c7d104","Type":"ContainerDied","Data":"badd59a33e9f26226f1dc7bcae57f3559f5fe8155232b8615a36880ded55de48"} Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.386556 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="badd59a33e9f26226f1dc7bcae57f3559f5fe8155232b8615a36880ded55de48" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.395463 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-874865cfc-trxxb" podStartSLOduration=3.395443733 podStartE2EDuration="3.395443733s" podCreationTimestamp="2026-03-12 16:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:48.391881345 +0000 UTC m=+1397.355843689" watchObservedRunningTime="2026-03-12 16:25:48.395443733 +0000 UTC m=+1397.359406077" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.445513 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.453415 4687 scope.go:117] "RemoveContainer" containerID="f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.504548 4687 scope.go:117] "RemoveContainer" containerID="f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.505185 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.515504 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.527831 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.528324 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="sg-core" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528338 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="sg-core" Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.528378 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api-log" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528387 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api-log" Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.528422 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="ceilometer-notification-agent" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528429 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="ceilometer-notification-agent" Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.528456 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="proxy-httpd" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528464 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="proxy-httpd" Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.528479 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528485 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528666 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="proxy-httpd" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528686 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="sg-core" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528696 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528707 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ef7e01d7-21f4-49be-8700-994e45280f37" containerName="ceilometer-notification-agent" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.528715 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" containerName="cinder-api-log" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.532872 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.535201 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.538739 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.552311 4687 scope.go:117] "RemoveContainer" containerID="1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d" Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.552903 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d\": container with ID starting with 1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d not found: ID does not exist" containerID="1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.552936 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d"} err="failed to get container status \"1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d\": rpc error: code = NotFound desc = could not find container \"1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d\": container with ID starting with 1c8cfb7c14a27bcb390f1c8b4b5667bb6bafd8931c6b485056612edd674f2a1d not found: ID does not exist" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.552957 4687 scope.go:117] "RemoveContainer" containerID="f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb" Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.553460 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb\": container with ID starting with f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb not found: ID does not exist" containerID="f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.553485 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb"} err="failed to get container status \"f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb\": rpc error: code = NotFound desc = could not find container \"f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb\": container with ID starting with f75ffa8ccbd9fd156944764f858621beaa5edc62e74b8e6abaaf86d0ff2e7acb not found: ID does not exist" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.553507 4687 scope.go:117] "RemoveContainer" containerID="f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.553820 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ceilometer-0"] Mar 12 16:25:48 crc kubenswrapper[4687]: E0312 16:25:48.554117 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1\": container with ID starting with f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1 not found: ID does not exist" containerID="f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.554145 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1"} err="failed to get container status \"f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1\": rpc error: code = NotFound desc = could not find container \"f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1\": container with ID starting with f6c182457bb02fb1a1ad6bd299bf91a3750d14aedd5f43e988cc996a012caca1 not found: ID does not exist" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570070 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgg85\" (UniqueName: \"kubernetes.io/projected/fb71c491-fc79-46b2-946c-eaa131c7d104-kube-api-access-dgg85\") pod \"fb71c491-fc79-46b2-946c-eaa131c7d104\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570143 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-combined-ca-bundle\") pod \"fb71c491-fc79-46b2-946c-eaa131c7d104\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570183 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-scripts\") pod \"fb71c491-fc79-46b2-946c-eaa131c7d104\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570230 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb71c491-fc79-46b2-946c-eaa131c7d104-logs\") pod \"fb71c491-fc79-46b2-946c-eaa131c7d104\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570304 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data-custom\") pod \"fb71c491-fc79-46b2-946c-eaa131c7d104\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570431 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data\") pod \"fb71c491-fc79-46b2-946c-eaa131c7d104\" (UID: \"fb71c491-fc79-46b2-946c-eaa131c7d104\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570537 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb71c491-fc79-46b2-946c-eaa131c7d104-etc-machine-id\") pod \"fb71c491-fc79-46b2-946c-eaa131c7d104\" (UID: 
\"fb71c491-fc79-46b2-946c-eaa131c7d104\") " Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-config-data\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570906 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-log-httpd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.570948 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-scripts\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.571011 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-run-httpd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.571046 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.571073 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nrd\" (UniqueName: \"kubernetes.io/projected/70b6d42e-354f-4b20-bbce-8083586b8630-kube-api-access-v5nrd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.571457 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb71c491-fc79-46b2-946c-eaa131c7d104-logs" (OuterVolumeSpecName: "logs") pod "fb71c491-fc79-46b2-946c-eaa131c7d104" (UID: "fb71c491-fc79-46b2-946c-eaa131c7d104"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.571616 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb71c491-fc79-46b2-946c-eaa131c7d104-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fb71c491-fc79-46b2-946c-eaa131c7d104" (UID: "fb71c491-fc79-46b2-946c-eaa131c7d104"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.574049 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb71c491-fc79-46b2-946c-eaa131c7d104-kube-api-access-dgg85" (OuterVolumeSpecName: "kube-api-access-dgg85") pod "fb71c491-fc79-46b2-946c-eaa131c7d104" (UID: "fb71c491-fc79-46b2-946c-eaa131c7d104"). InnerVolumeSpecName "kube-api-access-dgg85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.575197 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb71c491-fc79-46b2-946c-eaa131c7d104" (UID: "fb71c491-fc79-46b2-946c-eaa131c7d104"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.575290 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-scripts" (OuterVolumeSpecName: "scripts") pod "fb71c491-fc79-46b2-946c-eaa131c7d104" (UID: "fb71c491-fc79-46b2-946c-eaa131c7d104"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.629599 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb71c491-fc79-46b2-946c-eaa131c7d104" (UID: "fb71c491-fc79-46b2-946c-eaa131c7d104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.640469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data" (OuterVolumeSpecName: "config-data") pod "fb71c491-fc79-46b2-946c-eaa131c7d104" (UID: "fb71c491-fc79-46b2-946c-eaa131c7d104"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673446 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673515 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nrd\" (UniqueName: \"kubernetes.io/projected/70b6d42e-354f-4b20-bbce-8083586b8630-kube-api-access-v5nrd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-config-data\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673710 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-log-httpd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-scripts\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673852 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-run-httpd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673927 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgg85\" (UniqueName: \"kubernetes.io/projected/fb71c491-fc79-46b2-946c-eaa131c7d104-kube-api-access-dgg85\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673950 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673963 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673975 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb71c491-fc79-46b2-946c-eaa131c7d104-logs\") on 
node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673985 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.673995 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb71c491-fc79-46b2-946c-eaa131c7d104-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.674007 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb71c491-fc79-46b2-946c-eaa131c7d104-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.674580 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-run-httpd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.674611 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-log-httpd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.677243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.677858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.678550 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-config-data\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.682703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-scripts\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.691391 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nrd\" (UniqueName: \"kubernetes.io/projected/70b6d42e-354f-4b20-bbce-8083586b8630-kube-api-access-v5nrd\") pod \"ceilometer-0\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " pod="openstack/ceilometer-0" Mar 12 16:25:48 crc kubenswrapper[4687]: I0312 16:25:48.852784 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.007614 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bcccf94d8-svjfj" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.205:9311/healthcheck\": read tcp 10.217.0.2:37608->10.217.0.205:9311: read: connection reset by peer" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.007614 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bcccf94d8-svjfj" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.205:9311/healthcheck\": read tcp 10.217.0.2:37600->10.217.0.205:9311: read: connection reset by peer" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.399716 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.408620 4687 generic.go:334] "Generic (PLEG): container finished" podID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerID="bc0025b4605a3b64d76332cd5baac16b1e4833b5bc6d7a9c0e77b0fa085d32c4" exitCode=0 Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.408654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcccf94d8-svjfj" event={"ID":"b59c8500-2bca-4d5a-b644-354f3b3b453e","Type":"ContainerDied","Data":"bc0025b4605a3b64d76332cd5baac16b1e4833b5bc6d7a9c0e77b0fa085d32c4"} Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.408781 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.547008 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.568458 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.581534 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.598038 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:49 crc kubenswrapper[4687]: E0312 16:25:49.599750 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.599875 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api" Mar 12 16:25:49 crc kubenswrapper[4687]: E0312 16:25:49.599989 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api-log" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.600072 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api-log" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.601281 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api-log" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.601409 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" containerName="barbican-api" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.601997 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data\") pod \"b59c8500-2bca-4d5a-b644-354f3b3b453e\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.602097 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-combined-ca-bundle\") pod \"b59c8500-2bca-4d5a-b644-354f3b3b453e\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.602143 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls6fl\" (UniqueName: \"kubernetes.io/projected/b59c8500-2bca-4d5a-b644-354f3b3b453e-kube-api-access-ls6fl\") pod \"b59c8500-2bca-4d5a-b644-354f3b3b453e\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.602207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59c8500-2bca-4d5a-b644-354f3b3b453e-logs\") pod \"b59c8500-2bca-4d5a-b644-354f3b3b453e\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.602243 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data-custom\") pod \"b59c8500-2bca-4d5a-b644-354f3b3b453e\" (UID: \"b59c8500-2bca-4d5a-b644-354f3b3b453e\") " Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.603594 4687 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.603959 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b59c8500-2bca-4d5a-b644-354f3b3b453e-logs" (OuterVolumeSpecName: "logs") pod "b59c8500-2bca-4d5a-b644-354f3b3b453e" (UID: "b59c8500-2bca-4d5a-b644-354f3b3b453e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.609960 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.610276 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.610803 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b59c8500-2bca-4d5a-b644-354f3b3b453e" (UID: "b59c8500-2bca-4d5a-b644-354f3b3b453e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.610944 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.622015 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59c8500-2bca-4d5a-b644-354f3b3b453e-kube-api-access-ls6fl" (OuterVolumeSpecName: "kube-api-access-ls6fl") pod "b59c8500-2bca-4d5a-b644-354f3b3b453e" (UID: "b59c8500-2bca-4d5a-b644-354f3b3b453e"). InnerVolumeSpecName "kube-api-access-ls6fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.647636 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.656328 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b59c8500-2bca-4d5a-b644-354f3b3b453e" (UID: "b59c8500-2bca-4d5a-b644-354f3b3b453e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.682584 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data" (OuterVolumeSpecName: "config-data") pod "b59c8500-2bca-4d5a-b644-354f3b3b453e" (UID: "b59c8500-2bca-4d5a-b644-354f3b3b453e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.705118 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.705182 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-config-data\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.705290 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f67753ae-a56b-4974-93e0-70122db7ebde-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.705400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.705529 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7lj\" (UniqueName: \"kubernetes.io/projected/f67753ae-a56b-4974-93e0-70122db7ebde-kube-api-access-rf7lj\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.705573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-config-data-custom\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.705814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-scripts\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.706036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.706147 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67753ae-a56b-4974-93e0-70122db7ebde-logs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.706442 4687 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.706463 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.706477 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls6fl\" (UniqueName: \"kubernetes.io/projected/b59c8500-2bca-4d5a-b644-354f3b3b453e-kube-api-access-ls6fl\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.706502 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59c8500-2bca-4d5a-b644-354f3b3b453e-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.706512 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b59c8500-2bca-4d5a-b644-354f3b3b453e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.751218 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7e01d7-21f4-49be-8700-994e45280f37" path="/var/lib/kubelet/pods/ef7e01d7-21f4-49be-8700-994e45280f37/volumes" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.752156 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb71c491-fc79-46b2-946c-eaa131c7d104" path="/var/lib/kubelet/pods/fb71c491-fc79-46b2-946c-eaa131c7d104/volumes" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.808753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f67753ae-a56b-4974-93e0-70122db7ebde-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.809704 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.809832 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf7lj\" (UniqueName: \"kubernetes.io/projected/f67753ae-a56b-4974-93e0-70122db7ebde-kube-api-access-rf7lj\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.809944 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f67753ae-a56b-4974-93e0-70122db7ebde-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.809867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-config-data-custom\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: 
I0312 16:25:49.810277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-scripts\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.810875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.811445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67753ae-a56b-4974-93e0-70122db7ebde-logs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.811668 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.811717 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-config-data\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.814082 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.814100 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67753ae-a56b-4974-93e0-70122db7ebde-logs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.815166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-scripts\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.815432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.816607 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-config-data-custom\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.818824 4687 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.819703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67753ae-a56b-4974-93e0-70122db7ebde-config-data\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.827386 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf7lj\" (UniqueName: \"kubernetes.io/projected/f67753ae-a56b-4974-93e0-70122db7ebde-kube-api-access-rf7lj\") pod \"cinder-api-0\" (UID: \"f67753ae-a56b-4974-93e0-70122db7ebde\") " pod="openstack/cinder-api-0" Mar 12 16:25:49 crc kubenswrapper[4687]: I0312 16:25:49.983994 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.251269 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/2.log" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.252706 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.327312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-combined-ca-bundle\") pod \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.327417 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-httpd-config\") pod \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.327597 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-config\") pod \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.327661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-ovndb-tls-certs\") pod \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.327686 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc568\" (UniqueName: \"kubernetes.io/projected/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-kube-api-access-bc568\") pod \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\" (UID: \"3641fb4f-ebfa-4d32-a37c-ca304c44ccab\") " Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.333847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-httpd-config" (OuterVolumeSpecName: "httpd-config") pod 
"3641fb4f-ebfa-4d32-a37c-ca304c44ccab" (UID: "3641fb4f-ebfa-4d32-a37c-ca304c44ccab"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.359211 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-kube-api-access-bc568" (OuterVolumeSpecName: "kube-api-access-bc568") pod "3641fb4f-ebfa-4d32-a37c-ca304c44ccab" (UID: "3641fb4f-ebfa-4d32-a37c-ca304c44ccab"). InnerVolumeSpecName "kube-api-access-bc568". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.423746 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3641fb4f-ebfa-4d32-a37c-ca304c44ccab" (UID: "3641fb4f-ebfa-4d32-a37c-ca304c44ccab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.429111 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bcccf94d8-svjfj" event={"ID":"b59c8500-2bca-4d5a-b644-354f3b3b453e","Type":"ContainerDied","Data":"312e03fb9b1f5f857fece5cb28a803c2f6e8a0ba3c55e3085b433c52049050a3"} Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.429169 4687 scope.go:117] "RemoveContainer" containerID="bc0025b4605a3b64d76332cd5baac16b1e4833b5bc6d7a9c0e77b0fa085d32c4" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.429129 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bcccf94d8-svjfj" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.430237 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.430258 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.430267 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc568\" (UniqueName: \"kubernetes.io/projected/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-kube-api-access-bc568\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.430938 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerStarted","Data":"d57f4f4a474f131d79e1647278f27a7542bd93372cf57db6ca10c510fe10a983"} Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.430971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerStarted","Data":"8dc84e0286b083d2bca890dd532ac8adad72d49a81e2252ea4fe641f1c521946"} Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.432915 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-config" (OuterVolumeSpecName: "config") pod "3641fb4f-ebfa-4d32-a37c-ca304c44ccab" (UID: "3641fb4f-ebfa-4d32-a37c-ca304c44ccab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.432863 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-784df9f744-9vs8d_3641fb4f-ebfa-4d32-a37c-ca304c44ccab/neutron-httpd/2.log" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.433637 4687 generic.go:334] "Generic (PLEG): container finished" podID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerID="342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf" exitCode=0 Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.433827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerDied","Data":"342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf"} Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.433876 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-784df9f744-9vs8d" event={"ID":"3641fb4f-ebfa-4d32-a37c-ca304c44ccab","Type":"ContainerDied","Data":"3b1d40c1bcd374f05576d12410233c9c546189bbe71d2b66b2a16a6c6a628b68"} Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.437872 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-784df9f744-9vs8d" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.472912 4687 scope.go:117] "RemoveContainer" containerID="5fd48ba724bb0d5cf43f0a408b6c888cddf727ea9c4306d966b43d3bde0fceb8" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.474022 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3641fb4f-ebfa-4d32-a37c-ca304c44ccab" (UID: "3641fb4f-ebfa-4d32-a37c-ca304c44ccab"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.503266 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bcccf94d8-svjfj"] Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.506704 4687 scope.go:117] "RemoveContainer" containerID="d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d" Mar 12 16:25:50 crc kubenswrapper[4687]: W0312 16:25:50.523515 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf67753ae_a56b_4974_93e0_70122db7ebde.slice/crio-6c172e83428c587cb225af95787b6c3b395aeb15fb79277ace8fe32811bc572a WatchSource:0}: Error finding container 6c172e83428c587cb225af95787b6c3b395aeb15fb79277ace8fe32811bc572a: Status 404 returned error can't find the container with id 6c172e83428c587cb225af95787b6c3b395aeb15fb79277ace8fe32811bc572a Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.523552 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7bcccf94d8-svjfj"] Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.533292 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.533329 4687 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3641fb4f-ebfa-4d32-a37c-ca304c44ccab-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.537118 4687 scope.go:117] "RemoveContainer" containerID="342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.541746 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.579398 4687 scope.go:117] "RemoveContainer" containerID="d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d" Mar 12 16:25:50 crc kubenswrapper[4687]: E0312 16:25:50.580045 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d\": container with ID starting with d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d not found: ID does not exist" containerID="d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.580146 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d"} err="failed to get container status \"d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d\": rpc error: code = NotFound desc = could not find container \"d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d\": container with ID starting with d0e863ba85e22bc8e027b32bf1c79527175a8aeca0a4eee1a22ee131a23e602d not found: ID does not exist" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.580181 4687 scope.go:117] "RemoveContainer" containerID="342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf" Mar 12 16:25:50 crc kubenswrapper[4687]: E0312 16:25:50.580541 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf\": container with ID starting with 342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf not found: ID does not exist" containerID="342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.580571 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf"} err="failed to get container status \"342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf\": rpc error: code = NotFound desc = could not find container \"342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf\": container with ID starting with 342029029e405baf2a8f3ae99332b13324e60c7c29e9e8e1617e900152aeb1cf not found: ID does not exist" Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.783504 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-784df9f744-9vs8d"] Mar 12 16:25:50 crc kubenswrapper[4687]: I0312 16:25:50.796138 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-784df9f744-9vs8d"] Mar 12 16:25:51 crc kubenswrapper[4687]: I0312 16:25:51.446200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f67753ae-a56b-4974-93e0-70122db7ebde","Type":"ContainerStarted","Data":"8accc866263eca5d6d348016ebd8e935bddd4cd5c89cd8de79816cf2023521bd"} Mar 12 16:25:51 crc kubenswrapper[4687]: I0312 16:25:51.446539 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f67753ae-a56b-4974-93e0-70122db7ebde","Type":"ContainerStarted","Data":"6c172e83428c587cb225af95787b6c3b395aeb15fb79277ace8fe32811bc572a"} Mar 12 16:25:51 crc kubenswrapper[4687]: I0312 16:25:51.450986 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerStarted","Data":"59fc4f1a8357d53e70565ddcea7eb11ffe8658f946eb10831e6ab53752487f24"} Mar 12 16:25:51 crc kubenswrapper[4687]: I0312 16:25:51.761455 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" path="/var/lib/kubelet/pods/3641fb4f-ebfa-4d32-a37c-ca304c44ccab/volumes" Mar 12 16:25:51 crc kubenswrapper[4687]: I0312 16:25:51.762835 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59c8500-2bca-4d5a-b644-354f3b3b453e" path="/var/lib/kubelet/pods/b59c8500-2bca-4d5a-b644-354f3b3b453e/volumes" Mar 12 16:25:52 crc kubenswrapper[4687]: I0312 16:25:52.468022 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f67753ae-a56b-4974-93e0-70122db7ebde","Type":"ContainerStarted","Data":"f1e48e1d630ecd09323a7bdf1bd7bdeeccd73aa47b8609eb46f487149fcacf6b"} Mar 12 16:25:52 crc kubenswrapper[4687]: I0312 16:25:52.468403 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 16:25:52 crc kubenswrapper[4687]: I0312 16:25:52.470681 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerStarted","Data":"efe86ff2a947f9dcedb7535696abd5fafdffdbd2960292efdaf78693978ecf69"} Mar 12 16:25:52 crc kubenswrapper[4687]: I0312 16:25:52.492815 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.492800099 
podStartE2EDuration="3.492800099s" podCreationTimestamp="2026-03-12 16:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:52.491489673 +0000 UTC m=+1401.455452037" watchObservedRunningTime="2026-03-12 16:25:52.492800099 +0000 UTC m=+1401.456762443" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.022439 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.089031 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gs2gh"] Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.089284 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" podUID="920e5784-4b51-47ac-838d-4d216d80f128" containerName="dnsmasq-dns" containerID="cri-o://38ef10712bcec9d8a116e141b4469fc684084441857c56a6e295345167f3f09f" gracePeriod=10 Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.203762 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.267268 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.503308 4687 generic.go:334] "Generic (PLEG): container finished" podID="920e5784-4b51-47ac-838d-4d216d80f128" containerID="38ef10712bcec9d8a116e141b4469fc684084441857c56a6e295345167f3f09f" exitCode=0 Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.503636 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="cinder-scheduler" containerID="cri-o://5f2ac392d8c96544e58eb065c34577a5e0dc0299dd3ff6dd014fad8af03a04fe" gracePeriod=30 Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.503928 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="probe" containerID="cri-o://6983076d1bcb86f0431b34ed74ad752a43dc7eb8564a402f49a70fa19845ed83" gracePeriod=30 Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.503976 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" event={"ID":"920e5784-4b51-47ac-838d-4d216d80f128","Type":"ContainerDied","Data":"38ef10712bcec9d8a116e141b4469fc684084441857c56a6e295345167f3f09f"} Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.784598 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.809705 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-nb\") pod \"920e5784-4b51-47ac-838d-4d216d80f128\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.809860 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-swift-storage-0\") pod \"920e5784-4b51-47ac-838d-4d216d80f128\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.809944 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-sb\") pod \"920e5784-4b51-47ac-838d-4d216d80f128\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.809997 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-config\") pod \"920e5784-4b51-47ac-838d-4d216d80f128\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.810037 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-svc\") pod \"920e5784-4b51-47ac-838d-4d216d80f128\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.810405 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lqmz\" (UniqueName: \"kubernetes.io/projected/920e5784-4b51-47ac-838d-4d216d80f128-kube-api-access-9lqmz\") pod \"920e5784-4b51-47ac-838d-4d216d80f128\" (UID: \"920e5784-4b51-47ac-838d-4d216d80f128\") " Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.817702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920e5784-4b51-47ac-838d-4d216d80f128-kube-api-access-9lqmz" (OuterVolumeSpecName: "kube-api-access-9lqmz") pod "920e5784-4b51-47ac-838d-4d216d80f128" (UID: "920e5784-4b51-47ac-838d-4d216d80f128"). InnerVolumeSpecName "kube-api-access-9lqmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.818628 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lqmz\" (UniqueName: \"kubernetes.io/projected/920e5784-4b51-47ac-838d-4d216d80f128-kube-api-access-9lqmz\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.978964 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "920e5784-4b51-47ac-838d-4d216d80f128" (UID: "920e5784-4b51-47ac-838d-4d216d80f128"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.988857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "920e5784-4b51-47ac-838d-4d216d80f128" (UID: "920e5784-4b51-47ac-838d-4d216d80f128"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:53 crc kubenswrapper[4687]: I0312 16:25:53.999702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "920e5784-4b51-47ac-838d-4d216d80f128" (UID: "920e5784-4b51-47ac-838d-4d216d80f128"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.000554 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57gjt" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" probeResult="failure" output=< Mar 12 16:25:54 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:25:54 crc kubenswrapper[4687]: > Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.020458 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "920e5784-4b51-47ac-838d-4d216d80f128" (UID: "920e5784-4b51-47ac-838d-4d216d80f128"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.022025 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-config" (OuterVolumeSpecName: "config") pod "920e5784-4b51-47ac-838d-4d216d80f128" (UID: "920e5784-4b51-47ac-838d-4d216d80f128"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.022625 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.022653 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.022667 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.022679 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.022690 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920e5784-4b51-47ac-838d-4d216d80f128-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.516688 4687 generic.go:334] "Generic (PLEG): container finished" podID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerID="6983076d1bcb86f0431b34ed74ad752a43dc7eb8564a402f49a70fa19845ed83" exitCode=0 Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.516754 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"189425db-4e6c-4e63-94a5-0d32d6efea54","Type":"ContainerDied","Data":"6983076d1bcb86f0431b34ed74ad752a43dc7eb8564a402f49a70fa19845ed83"} Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.519004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" event={"ID":"920e5784-4b51-47ac-838d-4d216d80f128","Type":"ContainerDied","Data":"84cf75fb5a90e17541ab86af27f84f4fcec1dcf204ab81915cb2f344b767ff48"} Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.519028 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gs2gh" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.519065 4687 scope.go:117] "RemoveContainer" containerID="38ef10712bcec9d8a116e141b4469fc684084441857c56a6e295345167f3f09f" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.523727 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerStarted","Data":"4895ce23a2fe4db3735761d6e59f63cf0e585dc8c22332278dba4929777baa34"} Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.523870 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.563322 4687 scope.go:117] "RemoveContainer" containerID="ed33309c39a0eab3f783c65994e3839a239e5930a896612576502559af4a8275" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.573684 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.363917722 podStartE2EDuration="6.573659625s" podCreationTimestamp="2026-03-12 16:25:48 +0000 UTC" firstStartedPulling="2026-03-12 16:25:49.434712129 +0000 UTC m=+1398.398674473" lastFinishedPulling="2026-03-12 16:25:53.644454032 +0000 UTC m=+1402.608416376" observedRunningTime="2026-03-12 16:25:54.546019621 +0000 UTC m=+1403.509981975" watchObservedRunningTime="2026-03-12 16:25:54.573659625 +0000 UTC m=+1403.537621969" Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.578307 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gs2gh"] Mar 12 16:25:54 crc kubenswrapper[4687]: I0312 16:25:54.590691 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gs2gh"] Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.543512 4687 generic.go:334] "Generic (PLEG): container finished" podID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerID="5f2ac392d8c96544e58eb065c34577a5e0dc0299dd3ff6dd014fad8af03a04fe" exitCode=0 Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.544698 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"189425db-4e6c-4e63-94a5-0d32d6efea54","Type":"ContainerDied","Data":"5f2ac392d8c96544e58eb065c34577a5e0dc0299dd3ff6dd014fad8af03a04fe"} Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.764460 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920e5784-4b51-47ac-838d-4d216d80f128" path="/var/lib/kubelet/pods/920e5784-4b51-47ac-838d-4d216d80f128/volumes" Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.892128 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.964417 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/189425db-4e6c-4e63-94a5-0d32d6efea54-etc-machine-id\") pod \"189425db-4e6c-4e63-94a5-0d32d6efea54\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.964508 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-scripts\") pod \"189425db-4e6c-4e63-94a5-0d32d6efea54\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.964537 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data\") pod \"189425db-4e6c-4e63-94a5-0d32d6efea54\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.964773 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk9qv\" (UniqueName: \"kubernetes.io/projected/189425db-4e6c-4e63-94a5-0d32d6efea54-kube-api-access-rk9qv\") pod \"189425db-4e6c-4e63-94a5-0d32d6efea54\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.964865 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-combined-ca-bundle\") pod \"189425db-4e6c-4e63-94a5-0d32d6efea54\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.964891 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data-custom\") pod \"189425db-4e6c-4e63-94a5-0d32d6efea54\" (UID: \"189425db-4e6c-4e63-94a5-0d32d6efea54\") " Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.967345 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/189425db-4e6c-4e63-94a5-0d32d6efea54-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "189425db-4e6c-4e63-94a5-0d32d6efea54" (UID: "189425db-4e6c-4e63-94a5-0d32d6efea54"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.976457 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "189425db-4e6c-4e63-94a5-0d32d6efea54" (UID: "189425db-4e6c-4e63-94a5-0d32d6efea54"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.989108 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-scripts" (OuterVolumeSpecName: "scripts") pod "189425db-4e6c-4e63-94a5-0d32d6efea54" (UID: "189425db-4e6c-4e63-94a5-0d32d6efea54"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:55 crc kubenswrapper[4687]: I0312 16:25:55.989299 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189425db-4e6c-4e63-94a5-0d32d6efea54-kube-api-access-rk9qv" (OuterVolumeSpecName: "kube-api-access-rk9qv") pod "189425db-4e6c-4e63-94a5-0d32d6efea54" (UID: "189425db-4e6c-4e63-94a5-0d32d6efea54"). InnerVolumeSpecName "kube-api-access-rk9qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.035605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "189425db-4e6c-4e63-94a5-0d32d6efea54" (UID: "189425db-4e6c-4e63-94a5-0d32d6efea54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.067478 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk9qv\" (UniqueName: \"kubernetes.io/projected/189425db-4e6c-4e63-94a5-0d32d6efea54-kube-api-access-rk9qv\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.067509 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.067535 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.067545 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/189425db-4e6c-4e63-94a5-0d32d6efea54-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.067554 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.089838 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data" (OuterVolumeSpecName: "config-data") pod "189425db-4e6c-4e63-94a5-0d32d6efea54" (UID: "189425db-4e6c-4e63-94a5-0d32d6efea54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.169176 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/189425db-4e6c-4e63-94a5-0d32d6efea54-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.554793 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"189425db-4e6c-4e63-94a5-0d32d6efea54","Type":"ContainerDied","Data":"78d59d5e9fa78cdd2b3d28fdce69757c2565b49f498d3ce6cac5d6626d7095fc"} Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.554863 4687 scope.go:117] "RemoveContainer" containerID="6983076d1bcb86f0431b34ed74ad752a43dc7eb8564a402f49a70fa19845ed83" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.554913 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.589384 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.598608 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.608570 4687 scope.go:117] "RemoveContainer" containerID="5f2ac392d8c96544e58eb065c34577a5e0dc0299dd3ff6dd014fad8af03a04fe" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.624436 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:56 crc kubenswrapper[4687]: E0312 16:25:56.625044 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920e5784-4b51-47ac-838d-4d216d80f128" containerName="init" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625066 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="920e5784-4b51-47ac-838d-4d216d80f128" containerName="init" Mar 12 16:25:56 crc kubenswrapper[4687]: E0312 16:25:56.625082 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625090 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: E0312 16:25:56.625105 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="probe" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625114 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="probe" Mar 12 16:25:56 crc kubenswrapper[4687]: E0312 16:25:56.625127 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920e5784-4b51-47ac-838d-4d216d80f128" containerName="dnsmasq-dns" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625137 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="920e5784-4b51-47ac-838d-4d216d80f128" containerName="dnsmasq-dns" Mar 12 16:25:56 crc kubenswrapper[4687]: E0312 16:25:56.625176 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625184 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 
crc kubenswrapper[4687]: E0312 16:25:56.625203 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625211 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: E0312 16:25:56.625229 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-api" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625236 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-api" Mar 12 16:25:56 crc kubenswrapper[4687]: E0312 16:25:56.625255 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="cinder-scheduler" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625265 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="cinder-scheduler" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625571 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625596 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-api" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625618 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="cinder-scheduler" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625627 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="920e5784-4b51-47ac-838d-4d216d80f128" containerName="dnsmasq-dns" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625644 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" containerName="probe" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.625660 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.626181 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3641fb4f-ebfa-4d32-a37c-ca304c44ccab" containerName="neutron-httpd" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.627242 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.633856 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.653569 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.679740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.679809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.679837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.679945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba053ef-190c-4642-ac17-9876798b2390-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.680153 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.680228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4jbf\" (UniqueName: \"kubernetes.io/projected/0ba053ef-190c-4642-ac17-9876798b2390-kube-api-access-q4jbf\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.781805 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba053ef-190c-4642-ac17-9876798b2390-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.782154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.781891 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba053ef-190c-4642-ac17-9876798b2390-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.782184 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4jbf\" (UniqueName: \"kubernetes.io/projected/0ba053ef-190c-4642-ac17-9876798b2390-kube-api-access-q4jbf\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.782503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.782609 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.782639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.787771 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-scripts\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.787963 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-config-data\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.798724 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.799840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba053ef-190c-4642-ac17-9876798b2390-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 crc kubenswrapper[4687]: I0312 16:25:56.800340 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4jbf\" (UniqueName: \"kubernetes.io/projected/0ba053ef-190c-4642-ac17-9876798b2390-kube-api-access-q4jbf\") pod \"cinder-scheduler-0\" (UID: \"0ba053ef-190c-4642-ac17-9876798b2390\") " pod="openstack/cinder-scheduler-0" Mar 12 16:25:56 
crc kubenswrapper[4687]: I0312 16:25:56.961280 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 16:25:57 crc kubenswrapper[4687]: W0312 16:25:57.457595 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba053ef_190c_4642_ac17_9876798b2390.slice/crio-f06c0036800564068733421a86ed03560021b3ee3f2361a5fac36448d4089224 WatchSource:0}: Error finding container f06c0036800564068733421a86ed03560021b3ee3f2361a5fac36448d4089224: Status 404 returned error can't find the container with id f06c0036800564068733421a86ed03560021b3ee3f2361a5fac36448d4089224 Mar 12 16:25:57 crc kubenswrapper[4687]: I0312 16:25:57.487512 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 16:25:57 crc kubenswrapper[4687]: I0312 16:25:57.578210 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba053ef-190c-4642-ac17-9876798b2390","Type":"ContainerStarted","Data":"f06c0036800564068733421a86ed03560021b3ee3f2361a5fac36448d4089224"} Mar 12 16:25:57 crc kubenswrapper[4687]: I0312 16:25:57.763410 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189425db-4e6c-4e63-94a5-0d32d6efea54" path="/var/lib/kubelet/pods/189425db-4e6c-4e63-94a5-0d32d6efea54/volumes" Mar 12 16:25:58 crc kubenswrapper[4687]: I0312 16:25:58.403840 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:58 crc kubenswrapper[4687]: I0312 16:25:58.406103 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:25:58 crc kubenswrapper[4687]: I0312 16:25:58.635924 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba053ef-190c-4642-ac17-9876798b2390","Type":"ContainerStarted","Data":"c5372e7800e6cb562729bfd14e6d267da1c44ca0c8855028bd659a66a0367158"} Mar 12 16:25:58 crc kubenswrapper[4687]: I0312 16:25:58.937828 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:58 crc kubenswrapper[4687]: I0312 16:25:58.975774 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84c558f4db-5rcnd" Mar 12 16:25:59 crc kubenswrapper[4687]: I0312 16:25:59.136390 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7d44dff674-gspmf"] Mar 12 16:25:59 crc kubenswrapper[4687]: I0312 16:25:59.645775 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba053ef-190c-4642-ac17-9876798b2390","Type":"ContainerStarted","Data":"084c61e166e87c30c107524cf4045138d06b1601c6c75da9b325270670d565da"} Mar 12 16:25:59 crc kubenswrapper[4687]: I0312 16:25:59.646332 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7d44dff674-gspmf" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-api" containerID="cri-o://db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210" gracePeriod=30 Mar 12 16:25:59 crc kubenswrapper[4687]: I0312 16:25:59.646307 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7d44dff674-gspmf" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-log" 
containerID="cri-o://bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8" gracePeriod=30 Mar 12 16:25:59 crc kubenswrapper[4687]: I0312 16:25:59.675797 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.675779576 podStartE2EDuration="3.675779576s" podCreationTimestamp="2026-03-12 16:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:25:59.670349118 +0000 UTC m=+1408.634311482" watchObservedRunningTime="2026-03-12 16:25:59.675779576 +0000 UTC m=+1408.639741910" Mar 12 16:25:59 crc kubenswrapper[4687]: I0312 16:25:59.720976 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-684c74d595-mgzvt" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.131888 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555546-l7bqh"] Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.134126 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555546-l7bqh" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.136447 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.136688 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.137058 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.142467 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555546-l7bqh"] Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.195820 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/17278eb4-0763-4788-b98d-d58198b0add7-kube-api-access-c82x5\") pod \"auto-csr-approver-29555546-l7bqh\" (UID: \"17278eb4-0763-4788-b98d-d58198b0add7\") " pod="openshift-infra/auto-csr-approver-29555546-l7bqh" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.297425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/17278eb4-0763-4788-b98d-d58198b0add7-kube-api-access-c82x5\") pod \"auto-csr-approver-29555546-l7bqh\" (UID: \"17278eb4-0763-4788-b98d-d58198b0add7\") " pod="openshift-infra/auto-csr-approver-29555546-l7bqh" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.325059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/17278eb4-0763-4788-b98d-d58198b0add7-kube-api-access-c82x5\") pod \"auto-csr-approver-29555546-l7bqh\" (UID: \"17278eb4-0763-4788-b98d-d58198b0add7\") " pod="openshift-infra/auto-csr-approver-29555546-l7bqh" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.455292 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555546-l7bqh" Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.675148 4687 generic.go:334] "Generic (PLEG): container finished" podID="66165239-6c53-469e-97c0-fbcec87f22d3" containerID="bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8" exitCode=143 Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.675344 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d44dff674-gspmf" event={"ID":"66165239-6c53-469e-97c0-fbcec87f22d3","Type":"ContainerDied","Data":"bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8"} Mar 12 16:26:00 crc kubenswrapper[4687]: I0312 16:26:00.972909 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555546-l7bqh"] Mar 12 16:26:01 crc kubenswrapper[4687]: I0312 16:26:01.690705 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555546-l7bqh" event={"ID":"17278eb4-0763-4788-b98d-d58198b0add7","Type":"ContainerStarted","Data":"26b5495add3afacf17d526d3d8401be2dc9168e3901894b4d48ef3b0c4b78940"} Mar 12 16:26:01 crc kubenswrapper[4687]: I0312 16:26:01.962407 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 16:26:02 crc kubenswrapper[4687]: I0312 16:26:02.546787 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 12 16:26:02 crc kubenswrapper[4687]: I0312 16:26:02.942498 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 12 16:26:02 crc kubenswrapper[4687]: I0312 16:26:02.944009 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 16:26:02 crc kubenswrapper[4687]: I0312 16:26:02.948519 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-njml2" Mar 12 16:26:02 crc kubenswrapper[4687]: I0312 16:26:02.948743 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 12 16:26:02 crc kubenswrapper[4687]: I0312 16:26:02.953170 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 16:26:02 crc kubenswrapper[4687]: I0312 16:26:02.958321 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.061528 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.061621 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69db\" (UniqueName: \"kubernetes.io/projected/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-kube-api-access-n69db\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.061661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-openstack-config\") pod 
\"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.061753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-openstack-config-secret\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.164755 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n69db\" (UniqueName: \"kubernetes.io/projected/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-kube-api-access-n69db\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.165048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-openstack-config\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.165211 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-openstack-config-secret\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.165463 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.170664 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-openstack-config\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.174545 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.177932 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-openstack-config-secret\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.194267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n69db\" (UniqueName: \"kubernetes.io/projected/d39f34d0-cde8-47bd-8bfc-929c8cf9de03-kube-api-access-n69db\") pod \"openstackclient\" (UID: \"d39f34d0-cde8-47bd-8bfc-929c8cf9de03\") " pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.269174 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/openstackclient" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.498163 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.576567 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-combined-ca-bundle\") pod \"66165239-6c53-469e-97c0-fbcec87f22d3\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.577655 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-config-data\") pod \"66165239-6c53-469e-97c0-fbcec87f22d3\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.577771 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-public-tls-certs\") pod \"66165239-6c53-469e-97c0-fbcec87f22d3\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.577790 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-scripts\") pod \"66165239-6c53-469e-97c0-fbcec87f22d3\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.577868 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4nwq\" (UniqueName: \"kubernetes.io/projected/66165239-6c53-469e-97c0-fbcec87f22d3-kube-api-access-h4nwq\") pod \"66165239-6c53-469e-97c0-fbcec87f22d3\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.577900 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66165239-6c53-469e-97c0-fbcec87f22d3-logs\") pod \"66165239-6c53-469e-97c0-fbcec87f22d3\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.578013 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-internal-tls-certs\") pod \"66165239-6c53-469e-97c0-fbcec87f22d3\" (UID: \"66165239-6c53-469e-97c0-fbcec87f22d3\") " Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.587389 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66165239-6c53-469e-97c0-fbcec87f22d3-logs" (OuterVolumeSpecName: "logs") pod "66165239-6c53-469e-97c0-fbcec87f22d3" (UID: "66165239-6c53-469e-97c0-fbcec87f22d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.612592 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66165239-6c53-469e-97c0-fbcec87f22d3-kube-api-access-h4nwq" (OuterVolumeSpecName: "kube-api-access-h4nwq") pod "66165239-6c53-469e-97c0-fbcec87f22d3" (UID: "66165239-6c53-469e-97c0-fbcec87f22d3"). InnerVolumeSpecName "kube-api-access-h4nwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.636494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-scripts" (OuterVolumeSpecName: "scripts") pod "66165239-6c53-469e-97c0-fbcec87f22d3" (UID: "66165239-6c53-469e-97c0-fbcec87f22d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.680892 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.680921 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4nwq\" (UniqueName: \"kubernetes.io/projected/66165239-6c53-469e-97c0-fbcec87f22d3-kube-api-access-h4nwq\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.680933 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66165239-6c53-469e-97c0-fbcec87f22d3-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.786355 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66165239-6c53-469e-97c0-fbcec87f22d3" (UID: "66165239-6c53-469e-97c0-fbcec87f22d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.792790 4687 generic.go:334] "Generic (PLEG): container finished" podID="17278eb4-0763-4788-b98d-d58198b0add7" containerID="f31810308c6128e83c2297f6b5fd09f6660da7129cbb5699a5bcdd5393731a2e" exitCode=0 Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.816235 4687 generic.go:334] "Generic (PLEG): container finished" podID="66165239-6c53-469e-97c0-fbcec87f22d3" containerID="db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210" exitCode=0 Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.816315 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d44dff674-gspmf" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.845587 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-config-data" (OuterVolumeSpecName: "config-data") pod "66165239-6c53-469e-97c0-fbcec87f22d3" (UID: "66165239-6c53-469e-97c0-fbcec87f22d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.870609 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555546-l7bqh" event={"ID":"17278eb4-0763-4788-b98d-d58198b0add7","Type":"ContainerDied","Data":"f31810308c6128e83c2297f6b5fd09f6660da7129cbb5699a5bcdd5393731a2e"} Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.870646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d44dff674-gspmf" event={"ID":"66165239-6c53-469e-97c0-fbcec87f22d3","Type":"ContainerDied","Data":"db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210"} Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.870661 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d44dff674-gspmf" event={"ID":"66165239-6c53-469e-97c0-fbcec87f22d3","Type":"ContainerDied","Data":"813194da63759f0f38b10ff211392d310905ab3b48229dd0e57c77676846f0df"} Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.870678 4687 scope.go:117] "RemoveContainer" containerID="db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.886051 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.886073 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.898529 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "66165239-6c53-469e-97c0-fbcec87f22d3" (UID: "66165239-6c53-469e-97c0-fbcec87f22d3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.914928 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "66165239-6c53-469e-97c0-fbcec87f22d3" (UID: "66165239-6c53-469e-97c0-fbcec87f22d3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.929188 4687 scope.go:117] "RemoveContainer" containerID="bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.969824 4687 scope.go:117] "RemoveContainer" containerID="db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210" Mar 12 16:26:03 crc kubenswrapper[4687]: E0312 16:26:03.970431 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210\": container with ID starting with db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210 not found: ID does not exist" containerID="db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.970470 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210"} err="failed to get container status \"db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210\": rpc error: code = NotFound desc = could not find container \"db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210\": container with ID starting with db55f2589134c216189e8d5bed5111b289f9f7cc5a294d6866bcf193b31e3210 not found: ID does not exist" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.970501 4687 scope.go:117] "RemoveContainer" containerID="bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8" Mar 12 16:26:03 crc kubenswrapper[4687]: E0312 16:26:03.975650 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8\": container with ID starting with bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8 not found: ID does not exist" containerID="bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.975695 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8"} err="failed to get container status \"bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8\": rpc error: code = NotFound desc = could not find container \"bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8\": container with ID starting with bfb1a70b192b3333c466e267d668442db01aa85771fcfd48ef77e181a95ac1b8 not found: ID does not exist" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.988017 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:03 crc kubenswrapper[4687]: I0312 16:26:03.988053 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165239-6c53-469e-97c0-fbcec87f22d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:04 crc kubenswrapper[4687]: I0312 16:26:04.030063 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57gjt" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" probeResult="failure" output=< Mar 12 16:26:04 crc 
kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:26:04 crc kubenswrapper[4687]: > Mar 12 16:26:04 crc kubenswrapper[4687]: I0312 16:26:04.182824 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7d44dff674-gspmf"] Mar 12 16:26:04 crc kubenswrapper[4687]: I0312 16:26:04.207055 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7d44dff674-gspmf"] Mar 12 16:26:04 crc kubenswrapper[4687]: I0312 16:26:04.402160 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 16:26:04 crc kubenswrapper[4687]: I0312 16:26:04.830275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d39f34d0-cde8-47bd-8bfc-929c8cf9de03","Type":"ContainerStarted","Data":"707ffd94f9598391cdde5562d6384a434cea77d26c332091500a9d69445ead23"} Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.290110 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555546-l7bqh" Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.427983 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/17278eb4-0763-4788-b98d-d58198b0add7-kube-api-access-c82x5\") pod \"17278eb4-0763-4788-b98d-d58198b0add7\" (UID: \"17278eb4-0763-4788-b98d-d58198b0add7\") " Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.434570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17278eb4-0763-4788-b98d-d58198b0add7-kube-api-access-c82x5" (OuterVolumeSpecName: "kube-api-access-c82x5") pod "17278eb4-0763-4788-b98d-d58198b0add7" (UID: "17278eb4-0763-4788-b98d-d58198b0add7"). InnerVolumeSpecName "kube-api-access-c82x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.530645 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c82x5\" (UniqueName: \"kubernetes.io/projected/17278eb4-0763-4788-b98d-d58198b0add7-kube-api-access-c82x5\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.747911 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" path="/var/lib/kubelet/pods/66165239-6c53-469e-97c0-fbcec87f22d3/volumes" Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.845701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555546-l7bqh" event={"ID":"17278eb4-0763-4788-b98d-d58198b0add7","Type":"ContainerDied","Data":"26b5495add3afacf17d526d3d8401be2dc9168e3901894b4d48ef3b0c4b78940"} Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.845750 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26b5495add3afacf17d526d3d8401be2dc9168e3901894b4d48ef3b0c4b78940" Mar 12 16:26:05 crc kubenswrapper[4687]: I0312 16:26:05.845773 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555546-l7bqh" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.382681 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555540-q5b7v"] Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.397292 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555540-q5b7v"] Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.738264 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-66ff7fbdf4-jqph2"] Mar 12 16:26:06 crc kubenswrapper[4687]: E0312 16:26:06.738998 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-log" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.739015 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-log" Mar 12 16:26:06 crc kubenswrapper[4687]: E0312 16:26:06.739028 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-api" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.739035 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-api" Mar 12 16:26:06 crc kubenswrapper[4687]: E0312 16:26:06.739074 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17278eb4-0763-4788-b98d-d58198b0add7" containerName="oc" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.739080 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="17278eb4-0763-4788-b98d-d58198b0add7" containerName="oc" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.739267 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-api" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.739278 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="17278eb4-0763-4788-b98d-d58198b0add7" containerName="oc" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.739301 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="66165239-6c53-469e-97c0-fbcec87f22d3" containerName="placement-log" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.746038 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.749890 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.750127 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9lspd" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.752654 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.768958 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-66ff7fbdf4-jqph2"] Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.863257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-combined-ca-bundle\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.863395 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data-custom\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.863558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.863671 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5mx8\" (UniqueName: \"kubernetes.io/projected/0695d270-8d8d-4148-8782-8d14fb110c88-kube-api-access-j5mx8\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.927950 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rn4lx"] Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.929847 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.951415 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rn4lx"] Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.966185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data-custom\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.966307 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.966431 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5mx8\" (UniqueName: \"kubernetes.io/projected/0695d270-8d8d-4148-8782-8d14fb110c88-kube-api-access-j5mx8\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.966486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-combined-ca-bundle\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.978389 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.979156 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-combined-ca-bundle\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:06 crc kubenswrapper[4687]: I0312 16:26:06.996946 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data-custom\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.000427 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-85c84fc4db-nhqwg"] Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.002054 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.012233 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.016672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5mx8\" (UniqueName: \"kubernetes.io/projected/0695d270-8d8d-4148-8782-8d14fb110c88-kube-api-access-j5mx8\") pod \"heat-engine-66ff7fbdf4-jqph2\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.046466 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-85c84fc4db-nhqwg"] Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.075925 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076039 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-config\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076079 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076107 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076126 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6zk\" (UniqueName: \"kubernetes.io/projected/3baf7852-a4e6-4193-a927-c3c652875858-kube-api-access-lk6zk\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076152 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq586\" (UniqueName: \"kubernetes.io/projected/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-kube-api-access-vq586\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076171 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data-custom\") pod 
\"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076205 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.076253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-combined-ca-bundle\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.084086 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.195488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.195776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.195805 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6zk\" (UniqueName: \"kubernetes.io/projected/3baf7852-a4e6-4193-a927-c3c652875858-kube-api-access-lk6zk\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.195849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq586\" (UniqueName: \"kubernetes.io/projected/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-kube-api-access-vq586\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.195897 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data-custom\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc 
kubenswrapper[4687]: I0312 16:26:07.195973 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.196016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.196064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-combined-ca-bundle\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.196212 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.196421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-config\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.197377 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-config\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.197939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.203787 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.208887 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data-custom\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.209505 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.210043 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.232650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.237404 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq586\" (UniqueName: \"kubernetes.io/projected/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-kube-api-access-vq586\") pod \"dnsmasq-dns-7756b9d78c-rn4lx\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.239758 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6zk\" (UniqueName: \"kubernetes.io/projected/3baf7852-a4e6-4193-a927-c3c652875858-kube-api-access-lk6zk\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.251195 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-combined-ca-bundle\") pod \"heat-cfnapi-85c84fc4db-nhqwg\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.317486 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-55c84f9f84-km8kl"] Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.319069 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.322999 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.361229 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55c84f9f84-km8kl"] Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.402019 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.409799 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-combined-ca-bundle\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.409902 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lf5\" (UniqueName: \"kubernetes.io/projected/e6244ee9-3e89-4cd1-b6ba-8776b147a062-kube-api-access-x9lf5\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.409938 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data-custom\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.410007 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.426290 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.461786 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.512771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-combined-ca-bundle\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.512884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9lf5\" (UniqueName: \"kubernetes.io/projected/e6244ee9-3e89-4cd1-b6ba-8776b147a062-kube-api-access-x9lf5\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.512930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data-custom\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.513028 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.528727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-combined-ca-bundle\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.540328 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.543176 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9lf5\" (UniqueName: \"kubernetes.io/projected/e6244ee9-3e89-4cd1-b6ba-8776b147a062-kube-api-access-x9lf5\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.551319 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data-custom\") pod \"heat-api-55c84f9f84-km8kl\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.680659 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.793630 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61faf509-0e75-40f1-b755-c10c266408a9" path="/var/lib/kubelet/pods/61faf509-0e75-40f1-b755-c10c266408a9/volumes" Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.855990 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-66ff7fbdf4-jqph2"] Mar 12 16:26:07 crc kubenswrapper[4687]: W0312 16:26:07.869635 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0695d270_8d8d_4148_8782_8d14fb110c88.slice/crio-eb3e0d6270f393c9d2aff581e200eb7ac1c97f979aff1e4d8d387bcd89ce5563 WatchSource:0}: Error finding container eb3e0d6270f393c9d2aff581e200eb7ac1c97f979aff1e4d8d387bcd89ce5563: Status 404 returned error can't find the container with id eb3e0d6270f393c9d2aff581e200eb7ac1c97f979aff1e4d8d387bcd89ce5563 Mar 12 16:26:07 crc kubenswrapper[4687]: I0312 16:26:07.922171 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-66ff7fbdf4-jqph2" event={"ID":"0695d270-8d8d-4148-8782-8d14fb110c88","Type":"ContainerStarted","Data":"eb3e0d6270f393c9d2aff581e200eb7ac1c97f979aff1e4d8d387bcd89ce5563"} Mar 12 16:26:08 crc kubenswrapper[4687]: W0312 16:26:08.027994 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3d4b722_5a0f_4e77_b3de_db84d696b1e4.slice/crio-153d22eb162b485c5cb20de335a897c7b9f66305d78979957fc5a55a155b15e9 WatchSource:0}: Error finding container 153d22eb162b485c5cb20de335a897c7b9f66305d78979957fc5a55a155b15e9: Status 404 returned error can't find the container with id 153d22eb162b485c5cb20de335a897c7b9f66305d78979957fc5a55a155b15e9 Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.061381 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rn4lx"] Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.339997 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-85c84fc4db-nhqwg"] Mar 12 16:26:08 crc kubenswrapper[4687]: W0312 16:26:08.352859 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3baf7852_a4e6_4193_a927_c3c652875858.slice/crio-b8a52baeb0a2aa135c826933e15f2c764716ddbb0506b828e3aa0284ec7adede WatchSource:0}: Error finding container b8a52baeb0a2aa135c826933e15f2c764716ddbb0506b828e3aa0284ec7adede: Status 404 returned error can't find the container with id b8a52baeb0a2aa135c826933e15f2c764716ddbb0506b828e3aa0284ec7adede Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.376967 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55c84f9f84-km8kl"] Mar 12 16:26:08 crc kubenswrapper[4687]: W0312 16:26:08.391698 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6244ee9_3e89_4cd1_b6ba_8776b147a062.slice/crio-49d375f87793781fa854487a75c35902e8c78084ee9ea7558e4395237c3d49a7 WatchSource:0}: Error finding container 49d375f87793781fa854487a75c35902e8c78084ee9ea7558e4395237c3d49a7: Status 404 returned error can't find the container with id 49d375f87793781fa854487a75c35902e8c78084ee9ea7558e4395237c3d49a7 Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.805238 4687 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/swift-proxy-7b9d7fc5b5-76d88"] Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.809404 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.814170 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.814290 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.814329 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.823238 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b9d7fc5b5-76d88"] Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859538 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-combined-ca-bundle\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-etc-swift\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859686 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2r8\" (UniqueName: \"kubernetes.io/projected/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-kube-api-access-4z2r8\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-run-httpd\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859795 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-config-data\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859849 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-public-tls-certs\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859914 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-internal-tls-certs\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.859992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-log-httpd\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.953768 4687 generic.go:334] "Generic (PLEG): container finished" podID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerID="b33987c10d5e834a44105fcd07e880c3bc43048fc4c575a4d77f9b8ff0b312a8" exitCode=0 Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.953841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" event={"ID":"d3d4b722-5a0f-4e77-b3de-db84d696b1e4","Type":"ContainerDied","Data":"b33987c10d5e834a44105fcd07e880c3bc43048fc4c575a4d77f9b8ff0b312a8"} Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.953869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" event={"ID":"d3d4b722-5a0f-4e77-b3de-db84d696b1e4","Type":"ContainerStarted","Data":"153d22eb162b485c5cb20de335a897c7b9f66305d78979957fc5a55a155b15e9"} Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.962491 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-run-httpd\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.962806 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-config-data\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.962897 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-public-tls-certs\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.963001 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-internal-tls-certs\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.963154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-run-httpd\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.966099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-log-httpd\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.966458 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-combined-ca-bundle\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.966720 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-etc-swift\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.967027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2r8\" (UniqueName: \"kubernetes.io/projected/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-kube-api-access-4z2r8\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.968198 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55c84f9f84-km8kl" event={"ID":"e6244ee9-3e89-4cd1-b6ba-8776b147a062","Type":"ContainerStarted","Data":"49d375f87793781fa854487a75c35902e8c78084ee9ea7558e4395237c3d49a7"} Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.972630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-public-tls-certs\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.972748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-log-httpd\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.972771 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-config-data\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.975386 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-combined-ca-bundle\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.979698 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" event={"ID":"3baf7852-a4e6-4193-a927-c3c652875858","Type":"ContainerStarted","Data":"b8a52baeb0a2aa135c826933e15f2c764716ddbb0506b828e3aa0284ec7adede"} Mar 
12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.984910 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-internal-tls-certs\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.996244 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-66ff7fbdf4-jqph2" event={"ID":"0695d270-8d8d-4148-8782-8d14fb110c88","Type":"ContainerStarted","Data":"152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67"} Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.998047 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-etc-swift\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:08 crc kubenswrapper[4687]: I0312 16:26:08.998136 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:09 crc kubenswrapper[4687]: I0312 16:26:09.006837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2r8\" (UniqueName: \"kubernetes.io/projected/0bbd130e-9a81-466f-8d89-79c2fa5fdc4c-kube-api-access-4z2r8\") pod \"swift-proxy-7b9d7fc5b5-76d88\" (UID: \"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c\") " pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:09 crc kubenswrapper[4687]: I0312 16:26:09.110390 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-66ff7fbdf4-jqph2" podStartSLOduration=3.110349888 podStartE2EDuration="3.110349888s" podCreationTimestamp="2026-03-12 16:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:09.023745345 +0000 UTC m=+1417.987707689" watchObservedRunningTime="2026-03-12 16:26:09.110349888 +0000 UTC m=+1418.074312232" Mar 12 16:26:09 crc kubenswrapper[4687]: I0312 16:26:09.170635 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:09 crc kubenswrapper[4687]: I0312 16:26:09.831141 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b9d7fc5b5-76d88"] Mar 12 16:26:10 crc kubenswrapper[4687]: I0312 16:26:10.010675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" event={"ID":"d3d4b722-5a0f-4e77-b3de-db84d696b1e4","Type":"ContainerStarted","Data":"4ad76e6ccbecde7321398d99d746f5c31d0cd22a82d29667e2e3399f8a7e2f9f"} Mar 12 16:26:10 crc kubenswrapper[4687]: I0312 16:26:10.010813 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:10 crc kubenswrapper[4687]: I0312 16:26:10.012076 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" event={"ID":"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c","Type":"ContainerStarted","Data":"b238a6fd5620124ca0d426e32955d07724fb91fcf33d4be88b30a960790a7c14"} Mar 12 16:26:10 crc kubenswrapper[4687]: I0312 16:26:10.031480 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" podStartSLOduration=4.03145835 podStartE2EDuration="4.03145835s" podCreationTimestamp="2026-03-12 16:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:10.028604442 +0000 UTC m=+1418.992566796" watchObservedRunningTime="2026-03-12 16:26:10.03145835 +0000 UTC m=+1418.995420694" Mar 12 16:26:10 crc kubenswrapper[4687]: I0312 16:26:10.336304 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:26:10 crc kubenswrapper[4687]: I0312 16:26:10.337037 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-httpd" containerID="cri-o://1661c28baa01e9cfaa3aa820ae239bdea1de27b52ffe4d86baf84f587eef3221" gracePeriod=30 Mar 12 16:26:10 crc kubenswrapper[4687]: I0312 16:26:10.340133 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-log" containerID="cri-o://97fd5a5ead64e55a87cb413325440d309964310bd709e90795867c2632966c50" gracePeriod=30 Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.069079 4687 generic.go:334] "Generic (PLEG): container finished" podID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerID="97fd5a5ead64e55a87cb413325440d309964310bd709e90795867c2632966c50" exitCode=143 Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.069501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0","Type":"ContainerDied","Data":"97fd5a5ead64e55a87cb413325440d309964310bd709e90795867c2632966c50"} Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.840575 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.841144 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-central-agent" containerID="cri-o://d57f4f4a474f131d79e1647278f27a7542bd93372cf57db6ca10c510fe10a983" 
gracePeriod=30 Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.845594 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="proxy-httpd" containerID="cri-o://4895ce23a2fe4db3735761d6e59f63cf0e585dc8c22332278dba4929777baa34" gracePeriod=30 Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.846201 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="sg-core" containerID="cri-o://efe86ff2a947f9dcedb7535696abd5fafdffdbd2960292efdaf78693978ecf69" gracePeriod=30 Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.846340 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-notification-agent" containerID="cri-o://59fc4f1a8357d53e70565ddcea7eb11ffe8658f946eb10831e6ab53752487f24" gracePeriod=30 Mar 12 16:26:11 crc kubenswrapper[4687]: I0312 16:26:11.886710 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.080498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55c84f9f84-km8kl" event={"ID":"e6244ee9-3e89-4cd1-b6ba-8776b147a062","Type":"ContainerStarted","Data":"88dd2cc69ed4301625f753462469f2088b642b439fbf6b91b36f78a5945de6bd"} Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.080712 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.084006 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" event={"ID":"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c","Type":"ContainerStarted","Data":"3c3540b522fa85cd8cd798ba9eb1debb8b369a335194df88b2b43dfc2cac285d"} Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.085823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" event={"ID":"3baf7852-a4e6-4193-a927-c3c652875858","Type":"ContainerStarted","Data":"72d790fa62ad0da592c9ac6247d17353272c3304181b8062f59d242f8cec6da6"} Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.085951 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.088881 4687 generic.go:334] "Generic (PLEG): container finished" podID="70b6d42e-354f-4b20-bbce-8083586b8630" containerID="4895ce23a2fe4db3735761d6e59f63cf0e585dc8c22332278dba4929777baa34" exitCode=0 Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.088903 4687 generic.go:334] "Generic (PLEG): container finished" podID="70b6d42e-354f-4b20-bbce-8083586b8630" containerID="efe86ff2a947f9dcedb7535696abd5fafdffdbd2960292efdaf78693978ecf69" exitCode=2 Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.088920 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerDied","Data":"4895ce23a2fe4db3735761d6e59f63cf0e585dc8c22332278dba4929777baa34"} Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.088939 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerDied","Data":"efe86ff2a947f9dcedb7535696abd5fafdffdbd2960292efdaf78693978ecf69"} Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.108759 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-55c84f9f84-km8kl" podStartSLOduration=2.1596007520000002 podStartE2EDuration="5.108741468s" podCreationTimestamp="2026-03-12 16:26:07 +0000 UTC" firstStartedPulling="2026-03-12 16:26:08.397688033 +0000 UTC m=+1417.361650377" lastFinishedPulling="2026-03-12 16:26:11.346828749 +0000 UTC m=+1420.310791093" observedRunningTime="2026-03-12 16:26:12.104196614 +0000 UTC m=+1421.068158958" watchObservedRunningTime="2026-03-12 16:26:12.108741468 +0000 UTC m=+1421.072703812" Mar 12 16:26:12 crc kubenswrapper[4687]: I0312 16:26:12.131010 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" podStartSLOduration=3.15113452 podStartE2EDuration="6.130989405s" podCreationTimestamp="2026-03-12 16:26:06 +0000 UTC" firstStartedPulling="2026-03-12 16:26:08.365588157 +0000 UTC m=+1417.329550501" lastFinishedPulling="2026-03-12 16:26:11.345443042 +0000 UTC m=+1420.309405386" observedRunningTime="2026-03-12 16:26:12.118679979 +0000 UTC m=+1421.082642343" watchObservedRunningTime="2026-03-12 16:26:12.130989405 +0000 UTC m=+1421.094951749" Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.106509 4687 generic.go:334] "Generic (PLEG): container finished" podID="70b6d42e-354f-4b20-bbce-8083586b8630" containerID="59fc4f1a8357d53e70565ddcea7eb11ffe8658f946eb10831e6ab53752487f24" exitCode=0 Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.106858 4687 generic.go:334] "Generic (PLEG): container finished" podID="70b6d42e-354f-4b20-bbce-8083586b8630" containerID="d57f4f4a474f131d79e1647278f27a7542bd93372cf57db6ca10c510fe10a983" exitCode=0 Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.106607 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerDied","Data":"59fc4f1a8357d53e70565ddcea7eb11ffe8658f946eb10831e6ab53752487f24"} Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.106952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerDied","Data":"d57f4f4a474f131d79e1647278f27a7542bd93372cf57db6ca10c510fe10a983"} Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.526224 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.527617 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-log" containerID="cri-o://8b424bbb68f36f99a42bfe10ee37f0b99bf3b64c1459d6ddb6162a8173d24192" gracePeriod=30 Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.527690 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-httpd" containerID="cri-o://f7ded872c83d08118a23a01f5b2fa960a3a7411e0e4f4156b7d0a79aaab369ac" gracePeriod=30 Mar 12 16:26:13 crc kubenswrapper[4687]: I0312 16:26:13.986660 4687 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-57gjt" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" probeResult="failure" output=< Mar 12 16:26:13 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:26:13 crc kubenswrapper[4687]: > Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.120004 4687 generic.go:334] "Generic (PLEG): container finished" podID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerID="8b424bbb68f36f99a42bfe10ee37f0b99bf3b64c1459d6ddb6162a8173d24192" exitCode=143 Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.120056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6d81f6b-475e-4a5b-9fd8-006856dd645d","Type":"ContainerDied","Data":"8b424bbb68f36f99a42bfe10ee37f0b99bf3b64c1459d6ddb6162a8173d24192"} Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.122639 4687 generic.go:334] "Generic (PLEG): container finished" podID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerID="1661c28baa01e9cfaa3aa820ae239bdea1de27b52ffe4d86baf84f587eef3221" exitCode=0 Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.122690 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0","Type":"ContainerDied","Data":"1661c28baa01e9cfaa3aa820ae239bdea1de27b52ffe4d86baf84f587eef3221"} Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.345431 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7764695775-jnw2l"] Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.347031 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.367876 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-67996fb69c-bczjq"] Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.369544 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.432675 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-765f6bdcdf-s57mt"] Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.434477 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445367 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445415 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data-custom\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445435 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-combined-ca-bundle\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445656 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data-custom\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qdh\" (UniqueName: \"kubernetes.io/projected/a01a9875-3fca-4c27-81e1-e052629b04ea-kube-api-access-r8qdh\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445877 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445920 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-combined-ca-bundle\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.445957 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5mt\" (UniqueName: \"kubernetes.io/projected/03d64643-db52-410e-a678-980d5030356e-kube-api-access-2k5mt\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.446859 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7764695775-jnw2l"] Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 
16:26:14.465866 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67996fb69c-bczjq"] Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.480669 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-765f6bdcdf-s57mt"] Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552258 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qdh\" (UniqueName: \"kubernetes.io/projected/a01a9875-3fca-4c27-81e1-e052629b04ea-kube-api-access-r8qdh\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552382 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data-custom\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552416 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552476 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-combined-ca-bundle\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5mt\" (UniqueName: \"kubernetes.io/projected/03d64643-db52-410e-a678-980d5030356e-kube-api-access-2k5mt\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552641 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data-custom\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552663 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-combined-ca-bundle\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552759 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6p8l\" (UniqueName: \"kubernetes.io/projected/481ee492-9331-4038-a70b-2fc4eddfb60f-kube-api-access-x6p8l\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-combined-ca-bundle\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552838 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.552955 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data-custom\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.560549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data-custom\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.561150 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.562890 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-combined-ca-bundle\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.566200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-combined-ca-bundle\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.576873 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data-custom\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.579897 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r8qdh\" (UniqueName: \"kubernetes.io/projected/a01a9875-3fca-4c27-81e1-e052629b04ea-kube-api-access-r8qdh\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.580572 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k5mt\" (UniqueName: \"kubernetes.io/projected/03d64643-db52-410e-a678-980d5030356e-kube-api-access-2k5mt\") pod \"heat-api-7764695775-jnw2l\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.584702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data\") pod \"heat-cfnapi-67996fb69c-bczjq\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.657175 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6p8l\" (UniqueName: \"kubernetes.io/projected/481ee492-9331-4038-a70b-2fc4eddfb60f-kube-api-access-x6p8l\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.657431 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-combined-ca-bundle\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.657451 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.657559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data-custom\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.665421 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.666002 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data-custom\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.667142 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-combined-ca-bundle\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.672816 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.683430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6p8l\" (UniqueName: \"kubernetes.io/projected/481ee492-9331-4038-a70b-2fc4eddfb60f-kube-api-access-x6p8l\") pod \"heat-engine-765f6bdcdf-s57mt\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.698415 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:14 crc kubenswrapper[4687]: I0312 16:26:14.767275 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.851995 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-55c84f9f84-km8kl"] Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.852628 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-55c84f9f84-km8kl" podUID="e6244ee9-3e89-4cd1-b6ba-8776b147a062" containerName="heat-api" containerID="cri-o://88dd2cc69ed4301625f753462469f2088b642b439fbf6b91b36f78a5945de6bd" gracePeriod=60 Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.883403 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-85c84fc4db-nhqwg"] Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.883637 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" podUID="3baf7852-a4e6-4193-a927-c3c652875858" containerName="heat-cfnapi" containerID="cri-o://72d790fa62ad0da592c9ac6247d17353272c3304181b8062f59d242f8cec6da6" gracePeriod=60 Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.891205 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-67fdc58659-zfnqd"] Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.892955 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.895208 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.895388 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.900308 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-67fdc58659-zfnqd"] Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.950268 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7f74959f89-ptk6f"] Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.951811 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.965458 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.965556 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991161 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991212 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-internal-tls-certs\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991290 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-internal-tls-certs\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-combined-ca-bundle\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991353 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-public-tls-certs\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991447 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-combined-ca-bundle\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991464 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data-custom\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991488 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data\") pod 
\"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991520 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcvpc\" (UniqueName: \"kubernetes.io/projected/c8a3074c-212f-414d-b52b-69c5f0a9071b-kube-api-access-qcvpc\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data-custom\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991576 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-public-tls-certs\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:15 crc kubenswrapper[4687]: I0312 16:26:15.991610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm866\" (UniqueName: \"kubernetes.io/projected/f99b8239-d236-4756-ab84-03b17e79ce5b-kube-api-access-zm866\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.003170 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f74959f89-ptk6f"] Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.093959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-combined-ca-bundle\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094034 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-public-tls-certs\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094093 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data-custom\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094116 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-combined-ca-bundle\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 
16:26:16.094155 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094206 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcvpc\" (UniqueName: \"kubernetes.io/projected/c8a3074c-212f-414d-b52b-69c5f0a9071b-kube-api-access-qcvpc\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094252 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data-custom\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094273 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-public-tls-certs\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094321 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm866\" (UniqueName: \"kubernetes.io/projected/f99b8239-d236-4756-ab84-03b17e79ce5b-kube-api-access-zm866\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094433 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-internal-tls-certs\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.094542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-internal-tls-certs\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.103844 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-public-tls-certs\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.107235 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-combined-ca-bundle\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.108240 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-combined-ca-bundle\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.114211 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-internal-tls-certs\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.114382 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.114686 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data-custom\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.114884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-internal-tls-certs\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.114879 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-public-tls-certs\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.115727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data-custom\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.123013 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm866\" (UniqueName: \"kubernetes.io/projected/f99b8239-d236-4756-ab84-03b17e79ce5b-kube-api-access-zm866\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.123061 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data\") pod \"heat-cfnapi-7f74959f89-ptk6f\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.127207 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcvpc\" (UniqueName: \"kubernetes.io/projected/c8a3074c-212f-414d-b52b-69c5f0a9071b-kube-api-access-qcvpc\") pod \"heat-api-67fdc58659-zfnqd\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.287964 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.304061 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.422391 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-874865cfc-trxxb" Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.487408 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68cdbd957f-bt7lg"] Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.487824 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68cdbd957f-bt7lg" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-api" containerID="cri-o://01ef395583a96bd2bd9e7aea00f0d08dc770a624aed405cceb7e312fded76ec3" gracePeriod=30 Mar 12 16:26:16 crc kubenswrapper[4687]: I0312 16:26:16.488272 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68cdbd957f-bt7lg" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-httpd" containerID="cri-o://3ddc8db5928616656f2964d46677648cda6f9c7ffd840ee20ebb88fb57e174fb" gracePeriod=30 Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.161227 4687 generic.go:334] "Generic (PLEG): container finished" podID="3baf7852-a4e6-4193-a927-c3c652875858" containerID="72d790fa62ad0da592c9ac6247d17353272c3304181b8062f59d242f8cec6da6" exitCode=0 Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.161279 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" event={"ID":"3baf7852-a4e6-4193-a927-c3c652875858","Type":"ContainerDied","Data":"72d790fa62ad0da592c9ac6247d17353272c3304181b8062f59d242f8cec6da6"} Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.165003 4687 generic.go:334] "Generic (PLEG): container finished" podID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerID="3ddc8db5928616656f2964d46677648cda6f9c7ffd840ee20ebb88fb57e174fb" exitCode=0 Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.165068 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cdbd957f-bt7lg" event={"ID":"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1","Type":"ContainerDied","Data":"3ddc8db5928616656f2964d46677648cda6f9c7ffd840ee20ebb88fb57e174fb"} Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.167412 4687 generic.go:334] "Generic (PLEG): container finished" podID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerID="f7ded872c83d08118a23a01f5b2fa960a3a7411e0e4f4156b7d0a79aaab369ac" exitCode=0 Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.167477 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"b6d81f6b-475e-4a5b-9fd8-006856dd645d","Type":"ContainerDied","Data":"f7ded872c83d08118a23a01f5b2fa960a3a7411e0e4f4156b7d0a79aaab369ac"} Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.169508 4687 generic.go:334] "Generic (PLEG): container finished" podID="e6244ee9-3e89-4cd1-b6ba-8776b147a062" containerID="88dd2cc69ed4301625f753462469f2088b642b439fbf6b91b36f78a5945de6bd" exitCode=0 Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.169572 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55c84f9f84-km8kl" event={"ID":"e6244ee9-3e89-4cd1-b6ba-8776b147a062","Type":"ContainerDied","Data":"88dd2cc69ed4301625f753462469f2088b642b439fbf6b91b36f78a5945de6bd"} Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.404517 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.464780 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" podUID="3baf7852-a4e6-4193-a927-c3c652875858" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.219:8000/healthcheck\": dial tcp 10.217.0.219:8000: connect: connection refused" Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.483543 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2msn7"] Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.483876 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" containerName="dnsmasq-dns" containerID="cri-o://1ec6a3b82a323a87dfab248865326c5a035e5f1581ada9e4164e13cf4c824a40" gracePeriod=10 Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.685559 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-55c84f9f84-km8kl" podUID="e6244ee9-3e89-4cd1-b6ba-8776b147a062" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.220:8004/healthcheck\": dial tcp 10.217.0.220:8004: connect: connection refused" Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.920868 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dbdrp"] Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.922745 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:17 crc kubenswrapper[4687]: I0312 16:26:17.948516 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dbdrp"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.021525 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.209:5353: connect: connection refused" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.075000 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b16131f-3ac7-4721-ae74-8f7c937a3fec-operator-scripts\") pod \"nova-api-db-create-dbdrp\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.075133 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblxt\" (UniqueName: \"kubernetes.io/projected/9b16131f-3ac7-4721-ae74-8f7c937a3fec-kube-api-access-tblxt\") pod \"nova-api-db-create-dbdrp\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.109220 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6qfnd"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.110798 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.126601 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9891-account-create-update-lmtdz"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.128221 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.134687 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.139703 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6qfnd"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.154848 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9891-account-create-update-lmtdz"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.189007 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rzhq\" (UniqueName: \"kubernetes.io/projected/53e2c288-c849-482c-97e9-671307534961-kube-api-access-2rzhq\") pod \"nova-cell0-db-create-6qfnd\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.189230 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b16131f-3ac7-4721-ae74-8f7c937a3fec-operator-scripts\") pod \"nova-api-db-create-dbdrp\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.189488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tblxt\" (UniqueName: \"kubernetes.io/projected/9b16131f-3ac7-4721-ae74-8f7c937a3fec-kube-api-access-tblxt\") pod \"nova-api-db-create-dbdrp\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.189548 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e2c288-c849-482c-97e9-671307534961-operator-scripts\") pod \"nova-cell0-db-create-6qfnd\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.190265 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b16131f-3ac7-4721-ae74-8f7c937a3fec-operator-scripts\") pod \"nova-api-db-create-dbdrp\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.194269 4687 generic.go:334] "Generic (PLEG): container finished" podID="05c97834-0ab6-445e-9dd7-61e01484b052" containerID="1ec6a3b82a323a87dfab248865326c5a035e5f1581ada9e4164e13cf4c824a40" exitCode=0 Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.194307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" event={"ID":"05c97834-0ab6-445e-9dd7-61e01484b052","Type":"ContainerDied","Data":"1ec6a3b82a323a87dfab248865326c5a035e5f1581ada9e4164e13cf4c824a40"} Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.224498 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6627h"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.226436 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.235779 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6627h"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.245695 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tblxt\" (UniqueName: \"kubernetes.io/projected/9b16131f-3ac7-4721-ae74-8f7c937a3fec-kube-api-access-tblxt\") pod \"nova-api-db-create-dbdrp\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.256901 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.294514 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e2c288-c849-482c-97e9-671307534961-operator-scripts\") pod \"nova-cell0-db-create-6qfnd\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.294573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qgk\" (UniqueName: \"kubernetes.io/projected/652d0f48-b79d-4be2-a971-735297c4c3d6-kube-api-access-l4qgk\") pod \"nova-api-9891-account-create-update-lmtdz\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.294616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/652d0f48-b79d-4be2-a971-735297c4c3d6-operator-scripts\") pod \"nova-api-9891-account-create-update-lmtdz\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.294979 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rzhq\" (UniqueName: \"kubernetes.io/projected/53e2c288-c849-482c-97e9-671307534961-kube-api-access-2rzhq\") pod \"nova-cell0-db-create-6qfnd\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.296587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e2c288-c849-482c-97e9-671307534961-operator-scripts\") pod \"nova-cell0-db-create-6qfnd\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.321867 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-13ba-account-create-update-mjkr5"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.324094 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.326138 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rzhq\" (UniqueName: \"kubernetes.io/projected/53e2c288-c849-482c-97e9-671307534961-kube-api-access-2rzhq\") pod \"nova-cell0-db-create-6qfnd\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.326420 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.334935 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-13ba-account-create-update-mjkr5"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.398802 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjn4\" (UniqueName: \"kubernetes.io/projected/3221bf1a-072f-43e5-add9-4b21a6145692-kube-api-access-ldjn4\") pod \"nova-cell1-db-create-6627h\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.399072 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-operator-scripts\") pod \"nova-cell0-13ba-account-create-update-mjkr5\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.399164 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3221bf1a-072f-43e5-add9-4b21a6145692-operator-scripts\") pod \"nova-cell1-db-create-6627h\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.399296 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4qgk\" (UniqueName: \"kubernetes.io/projected/652d0f48-b79d-4be2-a971-735297c4c3d6-kube-api-access-l4qgk\") pod \"nova-api-9891-account-create-update-lmtdz\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.399322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/652d0f48-b79d-4be2-a971-735297c4c3d6-operator-scripts\") pod \"nova-api-9891-account-create-update-lmtdz\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.399373 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtz8\" (UniqueName: \"kubernetes.io/projected/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-kube-api-access-7gtz8\") pod \"nova-cell0-13ba-account-create-update-mjkr5\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.402753 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/652d0f48-b79d-4be2-a971-735297c4c3d6-operator-scripts\") pod \"nova-api-9891-account-create-update-lmtdz\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.414622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4qgk\" (UniqueName: \"kubernetes.io/projected/652d0f48-b79d-4be2-a971-735297c4c3d6-kube-api-access-l4qgk\") pod \"nova-api-9891-account-create-update-lmtdz\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.427559 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.447264 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.501620 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtz8\" (UniqueName: \"kubernetes.io/projected/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-kube-api-access-7gtz8\") pod \"nova-cell0-13ba-account-create-update-mjkr5\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.501672 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldjn4\" (UniqueName: \"kubernetes.io/projected/3221bf1a-072f-43e5-add9-4b21a6145692-kube-api-access-ldjn4\") pod \"nova-cell1-db-create-6627h\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.501723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-operator-scripts\") pod \"nova-cell0-13ba-account-create-update-mjkr5\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.501809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3221bf1a-072f-43e5-add9-4b21a6145692-operator-scripts\") pod \"nova-cell1-db-create-6627h\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.502674 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-operator-scripts\") pod \"nova-cell0-13ba-account-create-update-mjkr5\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.502751 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3221bf1a-072f-43e5-add9-4b21a6145692-operator-scripts\") pod \"nova-cell1-db-create-6627h\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 
16:26:18.532442 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1bf9-account-create-update-zk5jd"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.532728 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtz8\" (UniqueName: \"kubernetes.io/projected/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-kube-api-access-7gtz8\") pod \"nova-cell0-13ba-account-create-update-mjkr5\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.532815 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldjn4\" (UniqueName: \"kubernetes.io/projected/3221bf1a-072f-43e5-add9-4b21a6145692-kube-api-access-ldjn4\") pod \"nova-cell1-db-create-6627h\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.546095 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.548655 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.561705 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1bf9-account-create-update-zk5jd"] Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.690005 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.707513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8959ad5d-c828-4dcf-993f-4225e02fa8ff-operator-scripts\") pod \"nova-cell1-1bf9-account-create-update-zk5jd\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.707992 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.708608 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xrh\" (UniqueName: \"kubernetes.io/projected/8959ad5d-c828-4dcf-993f-4225e02fa8ff-kube-api-access-p2xrh\") pod \"nova-cell1-1bf9-account-create-update-zk5jd\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.811647 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xrh\" (UniqueName: \"kubernetes.io/projected/8959ad5d-c828-4dcf-993f-4225e02fa8ff-kube-api-access-p2xrh\") pod \"nova-cell1-1bf9-account-create-update-zk5jd\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.811744 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8959ad5d-c828-4dcf-993f-4225e02fa8ff-operator-scripts\") pod \"nova-cell1-1bf9-account-create-update-zk5jd\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.812718 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8959ad5d-c828-4dcf-993f-4225e02fa8ff-operator-scripts\") pod \"nova-cell1-1bf9-account-create-update-zk5jd\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.835101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xrh\" (UniqueName: \"kubernetes.io/projected/8959ad5d-c828-4dcf-993f-4225e02fa8ff-kube-api-access-p2xrh\") pod \"nova-cell1-1bf9-account-create-update-zk5jd\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.854242 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.212:3000/\": dial tcp 10.217.0.212:3000: connect: connection refused" Mar 12 16:26:18 crc kubenswrapper[4687]: I0312 16:26:18.961341 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:19 crc kubenswrapper[4687]: E0312 16:26:19.686103 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a4eb1a9_fe2f_4ec6_9894_359bb9eed9e1.slice/crio-01ef395583a96bd2bd9e7aea00f0d08dc770a624aed405cceb7e312fded76ec3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a4eb1a9_fe2f_4ec6_9894_359bb9eed9e1.slice/crio-conmon-01ef395583a96bd2bd9e7aea00f0d08dc770a624aed405cceb7e312fded76ec3.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:26:20 crc kubenswrapper[4687]: I0312 16:26:20.226294 4687 generic.go:334] "Generic (PLEG): container finished" podID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerID="01ef395583a96bd2bd9e7aea00f0d08dc770a624aed405cceb7e312fded76ec3" exitCode=0 Mar 12 16:26:20 crc kubenswrapper[4687]: I0312 16:26:20.226417 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cdbd957f-bt7lg" event={"ID":"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1","Type":"ContainerDied","Data":"01ef395583a96bd2bd9e7aea00f0d08dc770a624aed405cceb7e312fded76ec3"} Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.112183 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.118559 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211296 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-run-httpd\") pod \"70b6d42e-354f-4b20-bbce-8083586b8630\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211400 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-config-data\") pod \"70b6d42e-354f-4b20-bbce-8083586b8630\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211438 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data\") pod \"3baf7852-a4e6-4193-a927-c3c652875858\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211485 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-combined-ca-bundle\") pod \"70b6d42e-354f-4b20-bbce-8083586b8630\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211557 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-combined-ca-bundle\") pod \"3baf7852-a4e6-4193-a927-c3c652875858\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211607 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-v5nrd\" (UniqueName: \"kubernetes.io/projected/70b6d42e-354f-4b20-bbce-8083586b8630-kube-api-access-v5nrd\") pod \"70b6d42e-354f-4b20-bbce-8083586b8630\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211677 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-scripts\") pod \"70b6d42e-354f-4b20-bbce-8083586b8630\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211695 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-sg-core-conf-yaml\") pod \"70b6d42e-354f-4b20-bbce-8083586b8630\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211717 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk6zk\" (UniqueName: \"kubernetes.io/projected/3baf7852-a4e6-4193-a927-c3c652875858-kube-api-access-lk6zk\") pod \"3baf7852-a4e6-4193-a927-c3c652875858\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211777 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-log-httpd\") pod \"70b6d42e-354f-4b20-bbce-8083586b8630\" (UID: \"70b6d42e-354f-4b20-bbce-8083586b8630\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211794 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70b6d42e-354f-4b20-bbce-8083586b8630" (UID: "70b6d42e-354f-4b20-bbce-8083586b8630"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.211810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data-custom\") pod \"3baf7852-a4e6-4193-a927-c3c652875858\" (UID: \"3baf7852-a4e6-4193-a927-c3c652875858\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.213562 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.214873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70b6d42e-354f-4b20-bbce-8083586b8630" (UID: "70b6d42e-354f-4b20-bbce-8083586b8630"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.218556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b6d42e-354f-4b20-bbce-8083586b8630-kube-api-access-v5nrd" (OuterVolumeSpecName: "kube-api-access-v5nrd") pod "70b6d42e-354f-4b20-bbce-8083586b8630" (UID: "70b6d42e-354f-4b20-bbce-8083586b8630"). InnerVolumeSpecName "kube-api-access-v5nrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.222661 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-scripts" (OuterVolumeSpecName: "scripts") pod "70b6d42e-354f-4b20-bbce-8083586b8630" (UID: "70b6d42e-354f-4b20-bbce-8083586b8630"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.258598 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3baf7852-a4e6-4193-a927-c3c652875858" (UID: "3baf7852-a4e6-4193-a927-c3c652875858"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.261547 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3baf7852-a4e6-4193-a927-c3c652875858-kube-api-access-lk6zk" (OuterVolumeSpecName: "kube-api-access-lk6zk") pod "3baf7852-a4e6-4193-a927-c3c652875858" (UID: "3baf7852-a4e6-4193-a927-c3c652875858"). InnerVolumeSpecName "kube-api-access-lk6zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.265486 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.265962 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85c84fc4db-nhqwg" event={"ID":"3baf7852-a4e6-4193-a927-c3c652875858","Type":"ContainerDied","Data":"b8a52baeb0a2aa135c826933e15f2c764716ddbb0506b828e3aa0284ec7adede"} Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.266089 4687 scope.go:117] "RemoveContainer" containerID="72d790fa62ad0da592c9ac6247d17353272c3304181b8062f59d242f8cec6da6" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.271812 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68cdbd957f-bt7lg" event={"ID":"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1","Type":"ContainerDied","Data":"cd2f0629ca3a46d87cc9cbd29b1a0c7a100680cce5184d0e9be1006921abaa87"} Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.271851 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2f0629ca3a46d87cc9cbd29b1a0c7a100680cce5184d0e9be1006921abaa87" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.275168 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.275178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70b6d42e-354f-4b20-bbce-8083586b8630","Type":"ContainerDied","Data":"8dc84e0286b083d2bca890dd532ac8adad72d49a81e2252ea4fe641f1c521946"} Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.281193 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0","Type":"ContainerDied","Data":"b940cd7b21acfb934dff9664bf4ceaa37227d75fd28120774131b6630ec0f74b"} Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.281230 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b940cd7b21acfb934dff9664bf4ceaa37227d75fd28120774131b6630ec0f74b" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.284959 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55c84f9f84-km8kl" event={"ID":"e6244ee9-3e89-4cd1-b6ba-8776b147a062","Type":"ContainerDied","Data":"49d375f87793781fa854487a75c35902e8c78084ee9ea7558e4395237c3d49a7"} Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.285587 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49d375f87793781fa854487a75c35902e8c78084ee9ea7558e4395237c3d49a7" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.287646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" event={"ID":"05c97834-0ab6-445e-9dd7-61e01484b052","Type":"ContainerDied","Data":"538b69710c0985528030f885fded691583b86ef757298a7a5a5277629dcaf1c6"} Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.287769 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="538b69710c0985528030f885fded691583b86ef757298a7a5a5277629dcaf1c6" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.319801 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk6zk\" (UniqueName: \"kubernetes.io/projected/3baf7852-a4e6-4193-a927-c3c652875858-kube-api-access-lk6zk\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.319966 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b6d42e-354f-4b20-bbce-8083586b8630-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.320046 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.320106 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5nrd\" (UniqueName: \"kubernetes.io/projected/70b6d42e-354f-4b20-bbce-8083586b8630-kube-api-access-v5nrd\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.320158 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.365509 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"70b6d42e-354f-4b20-bbce-8083586b8630" (UID: "70b6d42e-354f-4b20-bbce-8083586b8630"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.388341 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3baf7852-a4e6-4193-a927-c3c652875858" (UID: "3baf7852-a4e6-4193-a927-c3c652875858"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.418869 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data" (OuterVolumeSpecName: "config-data") pod "3baf7852-a4e6-4193-a927-c3c652875858" (UID: "3baf7852-a4e6-4193-a927-c3c652875858"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.423093 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.423121 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3baf7852-a4e6-4193-a927-c3c652875858-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.423131 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.460919 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70b6d42e-354f-4b20-bbce-8083586b8630" (UID: "70b6d42e-354f-4b20-bbce-8083586b8630"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.506589 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.511270 4687 scope.go:117] "RemoveContainer" containerID="4895ce23a2fe4db3735761d6e59f63cf0e585dc8c22332278dba4929777baa34" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.518518 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.527377 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.536507 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-config-data" (OuterVolumeSpecName: "config-data") pod "70b6d42e-354f-4b20-bbce-8083586b8630" (UID: "70b6d42e-354f-4b20-bbce-8083586b8630"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: W0312 16:26:22.540149 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8a3074c_212f_414d_b52b_69c5f0a9071b.slice/crio-9ba3024609fa09a241830dea05903c13664f8a61264d99c75611fd0d72e370f9 WatchSource:0}: Error finding container 9ba3024609fa09a241830dea05903c13664f8a61264d99c75611fd0d72e370f9: Status 404 returned error can't find the container with id 9ba3024609fa09a241830dea05903c13664f8a61264d99c75611fd0d72e370f9 Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.549138 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.554226 4687 scope.go:117] "RemoveContainer" containerID="efe86ff2a947f9dcedb7535696abd5fafdffdbd2960292efdaf78693978ecf69" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.554481 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.564375 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-67fdc58659-zfnqd"] Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.618875 4687 scope.go:117] "RemoveContainer" containerID="59fc4f1a8357d53e70565ddcea7eb11ffe8658f946eb10831e6ab53752487f24" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.628588 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data-custom\") pod \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.629272 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-config-data\") pod \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.629305 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-combined-ca-bundle\") pod \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.632070 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9lf5\" (UniqueName: \"kubernetes.io/projected/e6244ee9-3e89-4cd1-b6ba-8776b147a062-kube-api-access-x9lf5\") pod \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.632132 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-scripts\") pod \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.646762 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod 
\"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.647046 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-scripts" (OuterVolumeSpecName: "scripts") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649259 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzdkt\" (UniqueName: \"kubernetes.io/projected/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-kube-api-access-wzdkt\") pod \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649313 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-logs\") pod \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649351 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-config\") pod \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649402 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-internal-tls-certs\") pod \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649430 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-httpd-config\") pod \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-combined-ca-bundle\") pod \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bp4p\" (UniqueName: \"kubernetes.io/projected/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-kube-api-access-7bp4p\") pod \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649955 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data\") pod \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\" (UID: \"e6244ee9-3e89-4cd1-b6ba-8776b147a062\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.649984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-httpd-run\") pod \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.650029 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-ovndb-tls-certs\") pod \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.650047 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-internal-tls-certs\") pod \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\" (UID: \"9fed2d6b-94a4-4b88-a539-9b3deffeb5e0\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.651594 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b6d42e-354f-4b20-bbce-8083586b8630-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.651614 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.655821 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-logs" (OuterVolumeSpecName: "logs") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.655987 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.657711 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e6244ee9-3e89-4cd1-b6ba-8776b147a062" (UID: "e6244ee9-3e89-4cd1-b6ba-8776b147a062"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.660952 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6244ee9-3e89-4cd1-b6ba-8776b147a062-kube-api-access-x9lf5" (OuterVolumeSpecName: "kube-api-access-x9lf5") pod "e6244ee9-3e89-4cd1-b6ba-8776b147a062" (UID: "e6244ee9-3e89-4cd1-b6ba-8776b147a062"). InnerVolumeSpecName "kube-api-access-x9lf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.685338 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-kube-api-access-wzdkt" (OuterVolumeSpecName: "kube-api-access-wzdkt") pod "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" (UID: "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1"). InnerVolumeSpecName "kube-api-access-wzdkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.690757 4687 scope.go:117] "RemoveContainer" containerID="d57f4f4a474f131d79e1647278f27a7542bd93372cf57db6ca10c510fe10a983" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.696119 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-kube-api-access-7bp4p" (OuterVolumeSpecName: "kube-api-access-7bp4p") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "kube-api-access-7bp4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.697588 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db" (OuterVolumeSpecName: "glance") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.698386 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" (UID: "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.701453 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-85c84fc4db-nhqwg"] Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.711437 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6244ee9-3e89-4cd1-b6ba-8776b147a062" (UID: "e6244ee9-3e89-4cd1-b6ba-8776b147a062"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.719398 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-85c84fc4db-nhqwg"] Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.736437 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.749912 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768229 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768747 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-central-agent" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768784 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-central-agent" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768799 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6244ee9-3e89-4cd1-b6ba-8776b147a062" containerName="heat-api" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768806 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6244ee9-3e89-4cd1-b6ba-8776b147a062" containerName="heat-api" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768819 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-log" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768827 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-log" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768837 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768844 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768865 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-api" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768872 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-api" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768882 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768888 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768900 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3baf7852-a4e6-4193-a927-c3c652875858" containerName="heat-cfnapi" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768905 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3baf7852-a4e6-4193-a927-c3c652875858" containerName="heat-cfnapi" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768918 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="proxy-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768924 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="proxy-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768937 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="sg-core" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768943 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="sg-core" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768950 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" containerName="init" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768956 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" containerName="init" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768963 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" containerName="dnsmasq-dns" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768968 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" containerName="dnsmasq-dns" Mar 12 16:26:22 crc kubenswrapper[4687]: E0312 16:26:22.768979 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-notification-agent" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.768985 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-notification-agent" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769212 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-api" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769223 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-notification-agent" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769235 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6244ee9-3e89-4cd1-b6ba-8776b147a062" containerName="heat-api" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769244 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" containerName="neutron-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769255 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769264 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3baf7852-a4e6-4193-a927-c3c652875858" containerName="heat-cfnapi" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769272 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" containerName="dnsmasq-dns" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769283 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" containerName="glance-log" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769295 4687 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="ceilometer-central-agent" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769310 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="sg-core" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.769319 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" containerName="proxy-httpd" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.771309 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.773761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz59w\" (UniqueName: \"kubernetes.io/projected/05c97834-0ab6-445e-9dd7-61e01484b052-kube-api-access-qz59w\") pod \"05c97834-0ab6-445e-9dd7-61e01484b052\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.773873 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-sb\") pod \"05c97834-0ab6-445e-9dd7-61e01484b052\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.773921 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-config\") pod \"05c97834-0ab6-445e-9dd7-61e01484b052\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.773956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-nb\") pod \"05c97834-0ab6-445e-9dd7-61e01484b052\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.773993 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-swift-storage-0\") pod \"05c97834-0ab6-445e-9dd7-61e01484b052\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.774028 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-combined-ca-bundle\") pod \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.774246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-svc\") pod \"05c97834-0ab6-445e-9dd7-61e01484b052\" (UID: \"05c97834-0ab6-445e-9dd7-61e01484b052\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.774282 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-public-tls-certs\") pod \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\" (UID: \"0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1\") " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 
16:26:22.775162 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775176 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775187 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775196 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9lf5\" (UniqueName: \"kubernetes.io/projected/e6244ee9-3e89-4cd1-b6ba-8776b147a062-kube-api-access-x9lf5\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775218 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") on node \"crc\" " Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775228 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzdkt\" (UniqueName: \"kubernetes.io/projected/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-kube-api-access-wzdkt\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775238 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775247 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.775255 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bp4p\" (UniqueName: \"kubernetes.io/projected/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-kube-api-access-7bp4p\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.778848 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.796324 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.796512 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.804319 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-config-data" (OuterVolumeSpecName: "config-data") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.808821 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.817249 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c97834-0ab6-445e-9dd7-61e01484b052-kube-api-access-qz59w" (OuterVolumeSpecName: "kube-api-access-qz59w") pod "05c97834-0ab6-445e-9dd7-61e01484b052" (UID: "05c97834-0ab6-445e-9dd7-61e01484b052"). InnerVolumeSpecName "kube-api-access-qz59w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.856202 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" (UID: "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-config-data\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878398 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878417 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-scripts\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-log-httpd\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878548 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9cq\" (UniqueName: \"kubernetes.io/projected/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-kube-api-access-ql9cq\") 
pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878592 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-run-httpd\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878667 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878678 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878687 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.878697 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz59w\" (UniqueName: \"kubernetes.io/projected/05c97834-0ab6-445e-9dd7-61e01484b052-kube-api-access-qz59w\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.932452 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.934051 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db") on node "crc" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.941496 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" (UID: "9fed2d6b-94a4-4b88-a539-9b3deffeb5e0"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980215 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-run-httpd\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980302 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-config-data\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980435 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-scripts\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-log-httpd\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql9cq\" (UniqueName: \"kubernetes.io/projected/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-kube-api-access-ql9cq\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980716 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.980731 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.981523 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-run-httpd\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.992777 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-log-httpd\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.994005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:22 crc kubenswrapper[4687]: I0312 16:26:22.997651 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-scripts\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.001369 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9cq\" (UniqueName: \"kubernetes.io/projected/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-kube-api-access-ql9cq\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.002734 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.019808 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-config-data\") pod \"ceilometer-0\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " pod="openstack/ceilometer-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.025391 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" (UID: "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.075562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-config" (OuterVolumeSpecName: "config") pod "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" (UID: "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.080010 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data" (OuterVolumeSpecName: "config-data") pod "e6244ee9-3e89-4cd1-b6ba-8776b147a062" (UID: "e6244ee9-3e89-4cd1-b6ba-8776b147a062"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.080425 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.083167 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.083188 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6244ee9-3e89-4cd1-b6ba-8776b147a062-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.083197 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.112004 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-config" (OuterVolumeSpecName: "config") pod "05c97834-0ab6-445e-9dd7-61e01484b052" (UID: "05c97834-0ab6-445e-9dd7-61e01484b052"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.143873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" (UID: "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.151605 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" (UID: "0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.152194 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "05c97834-0ab6-445e-9dd7-61e01484b052" (UID: "05c97834-0ab6-445e-9dd7-61e01484b052"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.172147 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05c97834-0ab6-445e-9dd7-61e01484b052" (UID: "05c97834-0ab6-445e-9dd7-61e01484b052"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.183780 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.192544 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.192575 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.192657 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.192677 4687 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.192689 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.197817 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05c97834-0ab6-445e-9dd7-61e01484b052" (UID: "05c97834-0ab6-445e-9dd7-61e01484b052"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.209506 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05c97834-0ab6-445e-9dd7-61e01484b052" (UID: "05c97834-0ab6-445e-9dd7-61e01484b052"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.229134 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.247424 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.296453 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-combined-ca-bundle\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.296499 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9nf4\" (UniqueName: \"kubernetes.io/projected/b6d81f6b-475e-4a5b-9fd8-006856dd645d-kube-api-access-l9nf4\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.296525 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-httpd-run\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.296560 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-config-data\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.296596 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-logs\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.296684 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-scripts\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.298161 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.300331 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.300423 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-public-tls-certs\") pod \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\" (UID: \"b6d81f6b-475e-4a5b-9fd8-006856dd645d\") " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.301112 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.301123 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.301133 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c97834-0ab6-445e-9dd7-61e01484b052-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.301797 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d81f6b-475e-4a5b-9fd8-006856dd645d-kube-api-access-l9nf4" (OuterVolumeSpecName: "kube-api-access-l9nf4") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "kube-api-access-l9nf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.310067 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-logs" (OuterVolumeSpecName: "logs") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.338534 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-scripts" (OuterVolumeSpecName: "scripts") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.338627 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1bf9-account-create-update-zk5jd"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.352460 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-765f6bdcdf-s57mt"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.363524 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d39f34d0-cde8-47bd-8bfc-929c8cf9de03","Type":"ContainerStarted","Data":"832ca674c4073fd230e95cb9104695371ce4d491cde1a2cf56e985bf5d1884d0"} Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.367602 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f74959f89-ptk6f"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.368622 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9" (OuterVolumeSpecName: "glance") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.394781 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6627h"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.398037 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.614793938 podStartE2EDuration="21.398020675s" podCreationTimestamp="2026-03-12 16:26:02 +0000 UTC" firstStartedPulling="2026-03-12 16:26:04.401078275 +0000 UTC m=+1413.365040619" lastFinishedPulling="2026-03-12 16:26:21.184305012 +0000 UTC m=+1430.148267356" observedRunningTime="2026-03-12 16:26:23.38030608 +0000 UTC m=+1432.344268424" watchObservedRunningTime="2026-03-12 16:26:23.398020675 +0000 UTC m=+1432.361983009" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.402858 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.402891 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") on node \"crc\" " Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.402904 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9nf4\" (UniqueName: \"kubernetes.io/projected/b6d81f6b-475e-4a5b-9fd8-006856dd645d-kube-api-access-l9nf4\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.402914 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6d81f6b-475e-4a5b-9fd8-006856dd645d-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.411591 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6d81f6b-475e-4a5b-9fd8-006856dd645d","Type":"ContainerDied","Data":"b2765b971ec2402eeed4fe3b127c14a30f1d85f5cff205f9e8986caf731fd36b"} Mar 12 16:26:23 crc 
kubenswrapper[4687]: I0312 16:26:23.411648 4687 scope.go:117] "RemoveContainer" containerID="f7ded872c83d08118a23a01f5b2fa960a3a7411e0e4f4156b7d0a79aaab369ac" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.411785 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.430110 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.457465 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-config-data" (OuterVolumeSpecName: "config-data") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.459136 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" event={"ID":"0bbd130e-9a81-466f-8d89-79c2fa5fdc4c","Type":"ContainerStarted","Data":"ee398d57fcb5fc84f181c1050f8db96a976ac682784adeff9a7fe039e2c77fa4"} Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.460736 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.460763 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.463018 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b6d81f6b-475e-4a5b-9fd8-006856dd645d" (UID: "b6d81f6b-475e-4a5b-9fd8-006856dd645d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.472563 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68cdbd957f-bt7lg" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.476462 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" podUID="0bbd130e-9a81-466f-8d89-79c2fa5fdc4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.483114 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.483180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67fdc58659-zfnqd" event={"ID":"c8a3074c-212f-414d-b52b-69c5f0a9071b","Type":"ContainerStarted","Data":"9ba3024609fa09a241830dea05903c13664f8a61264d99c75611fd0d72e370f9"} Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.488254 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-55c84f9f84-km8kl" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.508579 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.509217 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9") on node "crc" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.535509 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2msn7" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.548729 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" podStartSLOduration=15.548709701 podStartE2EDuration="15.548709701s" podCreationTimestamp="2026-03-12 16:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:23.489707287 +0000 UTC m=+1432.453669631" watchObservedRunningTime="2026-03-12 16:26:23.548709701 +0000 UTC m=+1432.512672035" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.566120 4687 scope.go:117] "RemoveContainer" containerID="8b424bbb68f36f99a42bfe10ee37f0b99bf3b64c1459d6ddb6162a8173d24192" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.582447 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7764695775-jnw2l"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.593760 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.593789 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.593804 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.593812 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d81f6b-475e-4a5b-9fd8-006856dd645d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.604282 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9891-account-create-update-lmtdz"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.617438 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-67fdc58659-zfnqd" podStartSLOduration=8.617420286 podStartE2EDuration="8.617420286s" podCreationTimestamp="2026-03-12 16:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:23.529056064 +0000 UTC m=+1432.493018408" watchObservedRunningTime="2026-03-12 16:26:23.617420286 +0000 UTC m=+1432.581382630" Mar 12 16:26:23 crc 
kubenswrapper[4687]: I0312 16:26:23.668486 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67996fb69c-bczjq"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.687925 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-55c84f9f84-km8kl"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.708949 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-55c84f9f84-km8kl"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.730317 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.767012 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3baf7852-a4e6-4193-a927-c3c652875858" path="/var/lib/kubelet/pods/3baf7852-a4e6-4193-a927-c3c652875858/volumes" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.770701 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b6d42e-354f-4b20-bbce-8083586b8630" path="/var/lib/kubelet/pods/70b6d42e-354f-4b20-bbce-8083586b8630/volumes" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.772101 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6244ee9-3e89-4cd1-b6ba-8776b147a062" path="/var/lib/kubelet/pods/e6244ee9-3e89-4cd1-b6ba-8776b147a062/volumes" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.802403 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.802465 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:26:23 crc kubenswrapper[4687]: E0312 16:26:23.803002 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-log" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.803023 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-log" Mar 12 16:26:23 crc kubenswrapper[4687]: E0312 16:26:23.803071 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-httpd" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.803080 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-httpd" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.803329 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-log" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.803946 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" containerName="glance-httpd" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.815965 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.816001 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68cdbd957f-bt7lg"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.816014 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68cdbd957f-bt7lg"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.816028 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2msn7"] Mar 12 16:26:23 crc 
kubenswrapper[4687]: I0312 16:26:23.816040 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2msn7"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.816115 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.820817 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dbdrp"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.828616 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.828864 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gln9t" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.829012 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.829150 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.841231 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6qfnd"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.890433 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-13ba-account-create-update-mjkr5"] Mar 12 16:26:23 crc kubenswrapper[4687]: I0312 16:26:23.941613 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57gjt"] Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014332 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgkvh\" (UniqueName: \"kubernetes.io/projected/924199ba-22cc-4b3a-8f1e-8ecf613daac5-kube-api-access-zgkvh\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014485 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/924199ba-22cc-4b3a-8f1e-8ecf613daac5-logs\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014515 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/924199ba-22cc-4b3a-8f1e-8ecf613daac5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014599 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.014730 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.064110 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117025 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgkvh\" (UniqueName: \"kubernetes.io/projected/924199ba-22cc-4b3a-8f1e-8ecf613daac5-kube-api-access-zgkvh\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117098 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/924199ba-22cc-4b3a-8f1e-8ecf613daac5-logs\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/924199ba-22cc-4b3a-8f1e-8ecf613daac5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117247 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 
16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117302 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117389 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.117413 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.118862 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/924199ba-22cc-4b3a-8f1e-8ecf613daac5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.120238 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/924199ba-22cc-4b3a-8f1e-8ecf613daac5-logs\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.123334 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.124581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.130202 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.139961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/924199ba-22cc-4b3a-8f1e-8ecf613daac5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.148170 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zgkvh\" (UniqueName: \"kubernetes.io/projected/924199ba-22cc-4b3a-8f1e-8ecf613daac5-kube-api-access-zgkvh\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.162077 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.162132 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac1099220b348776e40356de6ace90133c51ecbf0c7d7d5992bf76d1ea170c4e/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.183219 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" podUID="0bbd130e-9a81-466f-8d89-79c2fa5fdc4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.183638 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" podUID="0bbd130e-9a81-466f-8d89-79c2fa5fdc4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.222149 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ce69bea-4d2d-41be-9b4a-1ef3655c23db\") pod \"glance-default-internal-api-0\" (UID: \"924199ba-22cc-4b3a-8f1e-8ecf613daac5\") " pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.365983 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.497646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9891-account-create-update-lmtdz" event={"ID":"652d0f48-b79d-4be2-a971-735297c4c3d6","Type":"ContainerStarted","Data":"58926d5ff326e4dea7edc86614230726b74d6dca57c386702bbcac6a381ff286"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.502605 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67fdc58659-zfnqd" event={"ID":"c8a3074c-212f-414d-b52b-69c5f0a9071b","Type":"ContainerStarted","Data":"37094414c653a39bc0bf7364b05192aec41fafa63815641831bae639a4daf91a"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.503702 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.506307 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-765f6bdcdf-s57mt" event={"ID":"481ee492-9331-4038-a70b-2fc4eddfb60f","Type":"ContainerStarted","Data":"521ebf0afa93ca94e3a8b3b2ae58690311d700e4afb0e224d7f5c601e39004f4"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.509531 4687 generic.go:334] "Generic (PLEG): container finished" podID="8959ad5d-c828-4dcf-993f-4225e02fa8ff" containerID="fd4b5e1d581c40097a4911554709176928a0d1bd5434eda4deef8bf5432569c5" exitCode=0 Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.509602 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" event={"ID":"8959ad5d-c828-4dcf-993f-4225e02fa8ff","Type":"ContainerDied","Data":"fd4b5e1d581c40097a4911554709176928a0d1bd5434eda4deef8bf5432569c5"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.509636 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" event={"ID":"8959ad5d-c828-4dcf-993f-4225e02fa8ff","Type":"ContainerStarted","Data":"e7481a914ac067589000db8d7357949d2f622670413f5e611a729518e39146a9"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.512431 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerStarted","Data":"1bd10ee2e88dac69ef36259d10fe91f4a31ed83e0f64ade683d9a0a9b54b3787"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.514721 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" event={"ID":"f99b8239-d236-4756-ab84-03b17e79ce5b","Type":"ContainerStarted","Data":"9dc29de531d68242bace6181cad0742388b450bbf9ad0e85191a549b128ecb0e"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.516708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7764695775-jnw2l" event={"ID":"03d64643-db52-410e-a678-980d5030356e","Type":"ContainerStarted","Data":"c4c6f7c0d08e95d4a9842ef6c6b2ae5409f08cbffa7654fea329b4373bfea989"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.526864 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67996fb69c-bczjq" event={"ID":"a01a9875-3fca-4c27-81e1-e052629b04ea","Type":"ContainerStarted","Data":"bfa15183eefdb3bf438d23e54b9d7383a7a0953094d2d6c487e32fb3add4a9cc"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.528600 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6627h" 
event={"ID":"3221bf1a-072f-43e5-add9-4b21a6145692","Type":"ContainerStarted","Data":"5171825249e60cd0f9015a61306f07503bc14f81e8e87a42d0450bca948ecda8"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.528653 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6627h" event={"ID":"3221bf1a-072f-43e5-add9-4b21a6145692","Type":"ContainerStarted","Data":"d8a1129c9b5fa9440c379f4ee9e5e20c2670a1aaf8b9ca56763ff7540b334fde"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.599936 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6qfnd" event={"ID":"53e2c288-c849-482c-97e9-671307534961","Type":"ContainerStarted","Data":"0371b92bb93886a1ac1e1dca25068b59842db5522028a9bc97cda3f9cf3dd7f1"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.612135 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-6627h" podStartSLOduration=6.612111161 podStartE2EDuration="6.612111161s" podCreationTimestamp="2026-03-12 16:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:24.596966795 +0000 UTC m=+1433.560929139" watchObservedRunningTime="2026-03-12 16:26:24.612111161 +0000 UTC m=+1433.576073505" Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.621418 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" event={"ID":"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f","Type":"ContainerStarted","Data":"e5899849a42e8abfeb08c8268ed5664308b1d4933e3f920900703ac7142e195d"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.625210 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dbdrp" event={"ID":"9b16131f-3ac7-4721-ae74-8f7c937a3fec","Type":"ContainerStarted","Data":"7f31a26273743c610c7f192a5525181d4512d855c609cfcbb5f5e8d3a44c9229"} Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.625496 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-57gjt" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" containerID="cri-o://9e03df690c2ab0a0255523ecfa32f26fa1ebe41e3863e2dce435b26c4b3e0293" gracePeriod=2 Mar 12 16:26:24 crc kubenswrapper[4687]: I0312 16:26:24.636570 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" podUID="0bbd130e-9a81-466f-8d89-79c2fa5fdc4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.136576 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.276522 4687 scope.go:117] "RemoveContainer" containerID="50b0d7479586b3b72e803d123e9197a37f1a4b1b0f4222e56c471a06923b4a65" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.656337 4687 generic.go:334] "Generic (PLEG): container finished" podID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerID="9e03df690c2ab0a0255523ecfa32f26fa1ebe41e3863e2dce435b26c4b3e0293" exitCode=0 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.656796 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57gjt" 
event={"ID":"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb","Type":"ContainerDied","Data":"9e03df690c2ab0a0255523ecfa32f26fa1ebe41e3863e2dce435b26c4b3e0293"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.656833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57gjt" event={"ID":"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb","Type":"ContainerDied","Data":"a7b89cd7791f05f9a67dc61ea30862727088c27b1fa7d737f82e7f4daf1002ba"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.656849 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b89cd7791f05f9a67dc61ea30862727088c27b1fa7d737f82e7f4daf1002ba" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.658518 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.658769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"924199ba-22cc-4b3a-8f1e-8ecf613daac5","Type":"ContainerStarted","Data":"048489a0bee712c877ae9b9663b94fa4aff95af07b8e59e34b774229742732aa"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.661241 4687 generic.go:334] "Generic (PLEG): container finished" podID="831bdc9a-ebd0-4559-8f76-4f590cd3ea4f" containerID="ddb31459b6232ed9ad823e0e0cd6f522b682c3cf0126c06fbfc7d49a96ea46c6" exitCode=0 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.661313 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" event={"ID":"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f","Type":"ContainerDied","Data":"ddb31459b6232ed9ad823e0e0cd6f522b682c3cf0126c06fbfc7d49a96ea46c6"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.663577 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-765f6bdcdf-s57mt" event={"ID":"481ee492-9331-4038-a70b-2fc4eddfb60f","Type":"ContainerStarted","Data":"7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.664650 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.665924 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerStarted","Data":"e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.667204 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" event={"ID":"f99b8239-d236-4756-ab84-03b17e79ce5b","Type":"ContainerStarted","Data":"920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.668009 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.679943 4687 generic.go:334] "Generic (PLEG): container finished" podID="9b16131f-3ac7-4721-ae74-8f7c937a3fec" containerID="9cfa4e468e0d228171adcb1e558c0ea63581a15fd4e03271378cc078cf17ea4b" exitCode=0 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.680019 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dbdrp" 
event={"ID":"9b16131f-3ac7-4721-ae74-8f7c937a3fec","Type":"ContainerDied","Data":"9cfa4e468e0d228171adcb1e558c0ea63581a15fd4e03271378cc078cf17ea4b"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.681682 4687 generic.go:334] "Generic (PLEG): container finished" podID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerID="677ee5d094692caf3300b9a37ebbd8c3d9e811288f2118fa645eb98f9afddf4b" exitCode=1 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.681726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67996fb69c-bczjq" event={"ID":"a01a9875-3fca-4c27-81e1-e052629b04ea","Type":"ContainerDied","Data":"677ee5d094692caf3300b9a37ebbd8c3d9e811288f2118fa645eb98f9afddf4b"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.684775 4687 generic.go:334] "Generic (PLEG): container finished" podID="3221bf1a-072f-43e5-add9-4b21a6145692" containerID="5171825249e60cd0f9015a61306f07503bc14f81e8e87a42d0450bca948ecda8" exitCode=0 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.684833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6627h" event={"ID":"3221bf1a-072f-43e5-add9-4b21a6145692","Type":"ContainerDied","Data":"5171825249e60cd0f9015a61306f07503bc14f81e8e87a42d0450bca948ecda8"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.686016 4687 generic.go:334] "Generic (PLEG): container finished" podID="53e2c288-c849-482c-97e9-671307534961" containerID="bcd2c8a87e1a0716685610d7746b9d5dd976c22c79d808f674932b8e07c4b8b1" exitCode=0 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.686056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6qfnd" event={"ID":"53e2c288-c849-482c-97e9-671307534961","Type":"ContainerDied","Data":"bcd2c8a87e1a0716685610d7746b9d5dd976c22c79d808f674932b8e07c4b8b1"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.719175 4687 scope.go:117] "RemoveContainer" containerID="677ee5d094692caf3300b9a37ebbd8c3d9e811288f2118fa645eb98f9afddf4b" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.729995 4687 generic.go:334] "Generic (PLEG): container finished" podID="03d64643-db52-410e-a678-980d5030356e" containerID="e553b26f3bf8a73afb1099e6fd6771d14892cdbcfcf47415a2a412d349971d48" exitCode=1 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.730085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7764695775-jnw2l" event={"ID":"03d64643-db52-410e-a678-980d5030356e","Type":"ContainerDied","Data":"e553b26f3bf8a73afb1099e6fd6771d14892cdbcfcf47415a2a412d349971d48"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.734058 4687 scope.go:117] "RemoveContainer" containerID="e553b26f3bf8a73afb1099e6fd6771d14892cdbcfcf47415a2a412d349971d48" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.739758 4687 generic.go:334] "Generic (PLEG): container finished" podID="652d0f48-b79d-4be2-a971-735297c4c3d6" containerID="ef0108e98f9f57ee40f4ecfd3725dcf4476c4e7cc9f2f63277ddeb75ca67011e" exitCode=0 Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.751578 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c97834-0ab6-445e-9dd7-61e01484b052" path="/var/lib/kubelet/pods/05c97834-0ab6-445e-9dd7-61e01484b052/volumes" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.752011 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-765f6bdcdf-s57mt" podStartSLOduration=11.751348047 podStartE2EDuration="11.751348047s" 
podCreationTimestamp="2026-03-12 16:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:25.744243206 +0000 UTC m=+1434.708205550" watchObservedRunningTime="2026-03-12 16:26:25.751348047 +0000 UTC m=+1434.715310391" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.754436 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1" path="/var/lib/kubelet/pods/0a4eb1a9-fe2f-4ec6-9894-359bb9eed9e1/volumes" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.755129 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fed2d6b-94a4-4b88-a539-9b3deffeb5e0" path="/var/lib/kubelet/pods/9fed2d6b-94a4-4b88-a539-9b3deffeb5e0/volumes" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.771252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9891-account-create-update-lmtdz" event={"ID":"652d0f48-b79d-4be2-a971-735297c4c3d6","Type":"ContainerDied","Data":"ef0108e98f9f57ee40f4ecfd3725dcf4476c4e7cc9f2f63277ddeb75ca67011e"} Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.787828 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" podStartSLOduration=10.787807516 podStartE2EDuration="10.787807516s" podCreationTimestamp="2026-03-12 16:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:25.770981424 +0000 UTC m=+1434.734943778" watchObservedRunningTime="2026-03-12 16:26:25.787807516 +0000 UTC m=+1434.751769870" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.824107 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jl59\" (UniqueName: \"kubernetes.io/projected/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-kube-api-access-8jl59\") pod \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.824419 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-utilities\") pod \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.824462 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-catalog-content\") pod \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\" (UID: \"d51b91d1-2083-4aa4-8e9a-0f80ad1413eb\") " Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.825223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-utilities" (OuterVolumeSpecName: "utilities") pod "d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" (UID: "d51b91d1-2083-4aa4-8e9a-0f80ad1413eb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.840646 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-kube-api-access-8jl59" (OuterVolumeSpecName: "kube-api-access-8jl59") pod "d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" (UID: "d51b91d1-2083-4aa4-8e9a-0f80ad1413eb"). InnerVolumeSpecName "kube-api-access-8jl59". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.853470 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:25 crc kubenswrapper[4687]: I0312 16:26:25.853503 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jl59\" (UniqueName: \"kubernetes.io/projected/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-kube-api-access-8jl59\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:25 crc kubenswrapper[4687]: E0312 16:26:25.933768 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod652d0f48_b79d_4be2_a971_735297c4c3d6.slice/crio-conmon-ef0108e98f9f57ee40f4ecfd3725dcf4476c4e7cc9f2f63277ddeb75ca67011e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod652d0f48_b79d_4be2_a971_735297c4c3d6.slice/crio-ef0108e98f9f57ee40f4ecfd3725dcf4476c4e7cc9f2f63277ddeb75ca67011e.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.101498 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" (UID: "d51b91d1-2083-4aa4-8e9a-0f80ad1413eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.163759 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.339938 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.475067 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2xrh\" (UniqueName: \"kubernetes.io/projected/8959ad5d-c828-4dcf-993f-4225e02fa8ff-kube-api-access-p2xrh\") pod \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.475255 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8959ad5d-c828-4dcf-993f-4225e02fa8ff-operator-scripts\") pod \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\" (UID: \"8959ad5d-c828-4dcf-993f-4225e02fa8ff\") " Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.476159 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8959ad5d-c828-4dcf-993f-4225e02fa8ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8959ad5d-c828-4dcf-993f-4225e02fa8ff" (UID: "8959ad5d-c828-4dcf-993f-4225e02fa8ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.479123 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8959ad5d-c828-4dcf-993f-4225e02fa8ff-kube-api-access-p2xrh" (OuterVolumeSpecName: "kube-api-access-p2xrh") pod "8959ad5d-c828-4dcf-993f-4225e02fa8ff" (UID: "8959ad5d-c828-4dcf-993f-4225e02fa8ff"). InnerVolumeSpecName "kube-api-access-p2xrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.577559 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2xrh\" (UniqueName: \"kubernetes.io/projected/8959ad5d-c828-4dcf-993f-4225e02fa8ff-kube-api-access-p2xrh\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.577839 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8959ad5d-c828-4dcf-993f-4225e02fa8ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.752885 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"924199ba-22cc-4b3a-8f1e-8ecf613daac5","Type":"ContainerStarted","Data":"28bb7c974ef34f1bba5289100beb7c90c26a5fc8e819b65cc5111fc5b6ab91a8"} Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.756086 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" event={"ID":"8959ad5d-c828-4dcf-993f-4225e02fa8ff","Type":"ContainerDied","Data":"e7481a914ac067589000db8d7357949d2f622670413f5e611a729518e39146a9"} Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.756117 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7481a914ac067589000db8d7357949d2f622670413f5e611a729518e39146a9" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.756199 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1bf9-account-create-update-zk5jd" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.762559 4687 generic.go:334] "Generic (PLEG): container finished" podID="03d64643-db52-410e-a678-980d5030356e" containerID="f842437dd55895f4976c651b46e1b6f8e49afd78db4b97ca471ba2383170eb8a" exitCode=1 Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.762697 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7764695775-jnw2l" event={"ID":"03d64643-db52-410e-a678-980d5030356e","Type":"ContainerDied","Data":"f842437dd55895f4976c651b46e1b6f8e49afd78db4b97ca471ba2383170eb8a"} Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.763089 4687 scope.go:117] "RemoveContainer" containerID="e553b26f3bf8a73afb1099e6fd6771d14892cdbcfcf47415a2a412d349971d48" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.763551 4687 scope.go:117] "RemoveContainer" containerID="f842437dd55895f4976c651b46e1b6f8e49afd78db4b97ca471ba2383170eb8a" Mar 12 16:26:26 crc kubenswrapper[4687]: E0312 16:26:26.763830 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7764695775-jnw2l_openstack(03d64643-db52-410e-a678-980d5030356e)\"" pod="openstack/heat-api-7764695775-jnw2l" podUID="03d64643-db52-410e-a678-980d5030356e" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.773416 4687 generic.go:334] "Generic (PLEG): container finished" podID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerID="f4549430e80ae6379b6e8342e0046eab88c9d2c4a2f805de42d27adfb83a7432" exitCode=1 Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.773527 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67996fb69c-bczjq" event={"ID":"a01a9875-3fca-4c27-81e1-e052629b04ea","Type":"ContainerDied","Data":"f4549430e80ae6379b6e8342e0046eab88c9d2c4a2f805de42d27adfb83a7432"} Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.774401 4687 scope.go:117] "RemoveContainer" containerID="f4549430e80ae6379b6e8342e0046eab88c9d2c4a2f805de42d27adfb83a7432" Mar 12 16:26:26 crc kubenswrapper[4687]: E0312 16:26:26.774713 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-67996fb69c-bczjq_openstack(a01a9875-3fca-4c27-81e1-e052629b04ea)\"" pod="openstack/heat-cfnapi-67996fb69c-bczjq" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.786946 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerStarted","Data":"7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382"} Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.787437 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57gjt" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.930249 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57gjt"] Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.938047 4687 scope.go:117] "RemoveContainer" containerID="677ee5d094692caf3300b9a37ebbd8c3d9e811288f2118fa645eb98f9afddf4b" Mar 12 16:26:26 crc kubenswrapper[4687]: I0312 16:26:26.947130 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-57gjt"] Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.128754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.437602 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.519049 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/652d0f48-b79d-4be2-a971-735297c4c3d6-operator-scripts\") pod \"652d0f48-b79d-4be2-a971-735297c4c3d6\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.519207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4qgk\" (UniqueName: \"kubernetes.io/projected/652d0f48-b79d-4be2-a971-735297c4c3d6-kube-api-access-l4qgk\") pod \"652d0f48-b79d-4be2-a971-735297c4c3d6\" (UID: \"652d0f48-b79d-4be2-a971-735297c4c3d6\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.519997 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/652d0f48-b79d-4be2-a971-735297c4c3d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "652d0f48-b79d-4be2-a971-735297c4c3d6" (UID: "652d0f48-b79d-4be2-a971-735297c4c3d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.533787 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652d0f48-b79d-4be2-a971-735297c4c3d6-kube-api-access-l4qgk" (OuterVolumeSpecName: "kube-api-access-l4qgk") pod "652d0f48-b79d-4be2-a971-735297c4c3d6" (UID: "652d0f48-b79d-4be2-a971-735297c4c3d6"). InnerVolumeSpecName "kube-api-access-l4qgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.624675 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/652d0f48-b79d-4be2-a971-735297c4c3d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.624712 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4qgk\" (UniqueName: \"kubernetes.io/projected/652d0f48-b79d-4be2-a971-735297c4c3d6-kube-api-access-l4qgk\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.627191 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.656675 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.661539 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.667010 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.726959 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-operator-scripts\") pod \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.727021 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gtz8\" (UniqueName: \"kubernetes.io/projected/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-kube-api-access-7gtz8\") pod \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\" (UID: \"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.727054 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3221bf1a-072f-43e5-add9-4b21a6145692-operator-scripts\") pod \"3221bf1a-072f-43e5-add9-4b21a6145692\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.727136 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rzhq\" (UniqueName: \"kubernetes.io/projected/53e2c288-c849-482c-97e9-671307534961-kube-api-access-2rzhq\") pod \"53e2c288-c849-482c-97e9-671307534961\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.727238 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e2c288-c849-482c-97e9-671307534961-operator-scripts\") pod \"53e2c288-c849-482c-97e9-671307534961\" (UID: \"53e2c288-c849-482c-97e9-671307534961\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.727290 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldjn4\" (UniqueName: \"kubernetes.io/projected/3221bf1a-072f-43e5-add9-4b21a6145692-kube-api-access-ldjn4\") pod \"3221bf1a-072f-43e5-add9-4b21a6145692\" (UID: \"3221bf1a-072f-43e5-add9-4b21a6145692\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.727328 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b16131f-3ac7-4721-ae74-8f7c937a3fec-operator-scripts\") pod \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.727348 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tblxt\" (UniqueName: \"kubernetes.io/projected/9b16131f-3ac7-4721-ae74-8f7c937a3fec-kube-api-access-tblxt\") pod \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\" (UID: \"9b16131f-3ac7-4721-ae74-8f7c937a3fec\") " Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.728874 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3221bf1a-072f-43e5-add9-4b21a6145692-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3221bf1a-072f-43e5-add9-4b21a6145692" (UID: "3221bf1a-072f-43e5-add9-4b21a6145692"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.729828 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b16131f-3ac7-4721-ae74-8f7c937a3fec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b16131f-3ac7-4721-ae74-8f7c937a3fec" (UID: "9b16131f-3ac7-4721-ae74-8f7c937a3fec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.730233 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "831bdc9a-ebd0-4559-8f76-4f590cd3ea4f" (UID: "831bdc9a-ebd0-4559-8f76-4f590cd3ea4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.732469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e2c288-c849-482c-97e9-671307534961-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53e2c288-c849-482c-97e9-671307534961" (UID: "53e2c288-c849-482c-97e9-671307534961"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.732975 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b16131f-3ac7-4721-ae74-8f7c937a3fec-kube-api-access-tblxt" (OuterVolumeSpecName: "kube-api-access-tblxt") pod "9b16131f-3ac7-4721-ae74-8f7c937a3fec" (UID: "9b16131f-3ac7-4721-ae74-8f7c937a3fec"). InnerVolumeSpecName "kube-api-access-tblxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.736064 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3221bf1a-072f-43e5-add9-4b21a6145692-kube-api-access-ldjn4" (OuterVolumeSpecName: "kube-api-access-ldjn4") pod "3221bf1a-072f-43e5-add9-4b21a6145692" (UID: "3221bf1a-072f-43e5-add9-4b21a6145692"). InnerVolumeSpecName "kube-api-access-ldjn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.747709 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-kube-api-access-7gtz8" (OuterVolumeSpecName: "kube-api-access-7gtz8") pod "831bdc9a-ebd0-4559-8f76-4f590cd3ea4f" (UID: "831bdc9a-ebd0-4559-8f76-4f590cd3ea4f"). InnerVolumeSpecName "kube-api-access-7gtz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.753698 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e2c288-c849-482c-97e9-671307534961-kube-api-access-2rzhq" (OuterVolumeSpecName: "kube-api-access-2rzhq") pod "53e2c288-c849-482c-97e9-671307534961" (UID: "53e2c288-c849-482c-97e9-671307534961"). InnerVolumeSpecName "kube-api-access-2rzhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.754635 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" path="/var/lib/kubelet/pods/d51b91d1-2083-4aa4-8e9a-0f80ad1413eb/volumes" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.825979 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9891-account-create-update-lmtdz" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.826831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9891-account-create-update-lmtdz" event={"ID":"652d0f48-b79d-4be2-a971-735297c4c3d6","Type":"ContainerDied","Data":"58926d5ff326e4dea7edc86614230726b74d6dca57c386702bbcac6a381ff286"} Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.826903 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58926d5ff326e4dea7edc86614230726b74d6dca57c386702bbcac6a381ff286" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.833326 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"924199ba-22cc-4b3a-8f1e-8ecf613daac5","Type":"ContainerStarted","Data":"501bd705ece5adb33921837e1e57b06f1560023c3ba1a1a8f6a13ad7f380867f"} Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.836309 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6627h" event={"ID":"3221bf1a-072f-43e5-add9-4b21a6145692","Type":"ContainerDied","Data":"d8a1129c9b5fa9440c379f4ee9e5e20c2670a1aaf8b9ca56763ff7540b334fde"} Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.836393 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a1129c9b5fa9440c379f4ee9e5e20c2670a1aaf8b9ca56763ff7540b334fde" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.836452 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6627h" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.838982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6qfnd" event={"ID":"53e2c288-c849-482c-97e9-671307534961","Type":"ContainerDied","Data":"0371b92bb93886a1ac1e1dca25068b59842db5522028a9bc97cda3f9cf3dd7f1"} Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.844145 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0371b92bb93886a1ac1e1dca25068b59842db5522028a9bc97cda3f9cf3dd7f1" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.839652 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6qfnd" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.844947 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" event={"ID":"831bdc9a-ebd0-4559-8f76-4f590cd3ea4f","Type":"ContainerDied","Data":"e5899849a42e8abfeb08c8268ed5664308b1d4933e3f920900703ac7142e195d"} Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.844997 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5899849a42e8abfeb08c8268ed5664308b1d4933e3f920900703ac7142e195d" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.845078 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-13ba-account-create-update-mjkr5" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852315 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rzhq\" (UniqueName: \"kubernetes.io/projected/53e2c288-c849-482c-97e9-671307534961-kube-api-access-2rzhq\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852340 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e2c288-c849-482c-97e9-671307534961-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852351 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldjn4\" (UniqueName: \"kubernetes.io/projected/3221bf1a-072f-43e5-add9-4b21a6145692-kube-api-access-ldjn4\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852373 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b16131f-3ac7-4721-ae74-8f7c937a3fec-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852388 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tblxt\" (UniqueName: \"kubernetes.io/projected/9b16131f-3ac7-4721-ae74-8f7c937a3fec-kube-api-access-tblxt\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852399 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852408 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gtz8\" (UniqueName: \"kubernetes.io/projected/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f-kube-api-access-7gtz8\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.852416 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3221bf1a-072f-43e5-add9-4b21a6145692-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.862282 4687 scope.go:117] "RemoveContainer" containerID="f4549430e80ae6379b6e8342e0046eab88c9d2c4a2f805de42d27adfb83a7432" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.870388 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerStarted","Data":"58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8"} Mar 12 16:26:27 crc kubenswrapper[4687]: E0312 16:26:27.872307 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-67996fb69c-bczjq_openstack(a01a9875-3fca-4c27-81e1-e052629b04ea)\"" pod="openstack/heat-cfnapi-67996fb69c-bczjq" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.878789 4687 scope.go:117] "RemoveContainer" containerID="f842437dd55895f4976c651b46e1b6f8e49afd78db4b97ca471ba2383170eb8a" Mar 12 16:26:27 crc kubenswrapper[4687]: E0312 16:26:27.879450 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" 
with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7764695775-jnw2l_openstack(03d64643-db52-410e-a678-980d5030356e)\"" pod="openstack/heat-api-7764695775-jnw2l" podUID="03d64643-db52-410e-a678-980d5030356e" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.884420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dbdrp" event={"ID":"9b16131f-3ac7-4721-ae74-8f7c937a3fec","Type":"ContainerDied","Data":"7f31a26273743c610c7f192a5525181d4512d855c609cfcbb5f5e8d3a44c9229"} Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.884458 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f31a26273743c610c7f192a5525181d4512d855c609cfcbb5f5e8d3a44c9229" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.884665 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dbdrp" Mar 12 16:26:27 crc kubenswrapper[4687]: I0312 16:26:27.894594 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.894570158 podStartE2EDuration="4.894570158s" podCreationTimestamp="2026-03-12 16:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:27.863148664 +0000 UTC m=+1436.827111028" watchObservedRunningTime="2026-03-12 16:26:27.894570158 +0000 UTC m=+1436.858532502" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.195500 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.234520 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.394990 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.673446 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.673497 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.674264 4687 scope.go:117] "RemoveContainer" containerID="f842437dd55895f4976c651b46e1b6f8e49afd78db4b97ca471ba2383170eb8a" Mar 12 16:26:29 crc kubenswrapper[4687]: E0312 16:26:29.674678 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7764695775-jnw2l_openstack(03d64643-db52-410e-a678-980d5030356e)\"" pod="openstack/heat-api-7764695775-jnw2l" podUID="03d64643-db52-410e-a678-980d5030356e" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.699262 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.699338 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:29 crc kubenswrapper[4687]: I0312 16:26:29.700322 4687 scope.go:117] "RemoveContainer" containerID="f4549430e80ae6379b6e8342e0046eab88c9d2c4a2f805de42d27adfb83a7432" Mar 12 16:26:29 crc 
kubenswrapper[4687]: E0312 16:26:29.700698 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-67996fb69c-bczjq_openstack(a01a9875-3fca-4c27-81e1-e052629b04ea)\"" pod="openstack/heat-cfnapi-67996fb69c-bczjq" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" Mar 12 16:26:30 crc kubenswrapper[4687]: I0312 16:26:30.933408 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerStarted","Data":"4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516"} Mar 12 16:26:30 crc kubenswrapper[4687]: I0312 16:26:30.933751 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:26:30 crc kubenswrapper[4687]: I0312 16:26:30.933679 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-central-agent" containerID="cri-o://e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658" gracePeriod=30 Mar 12 16:26:30 crc kubenswrapper[4687]: I0312 16:26:30.933836 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="proxy-httpd" containerID="cri-o://4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516" gracePeriod=30 Mar 12 16:26:30 crc kubenswrapper[4687]: I0312 16:26:30.933916 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-notification-agent" containerID="cri-o://7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382" gracePeriod=30 Mar 12 16:26:30 crc kubenswrapper[4687]: I0312 16:26:30.933962 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="sg-core" containerID="cri-o://58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8" gracePeriod=30 Mar 12 16:26:30 crc kubenswrapper[4687]: I0312 16:26:30.971328 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.379954295 podStartE2EDuration="8.971298741s" podCreationTimestamp="2026-03-12 16:26:22 +0000 UTC" firstStartedPulling="2026-03-12 16:26:24.218271227 +0000 UTC m=+1433.182233571" lastFinishedPulling="2026-03-12 16:26:29.809615673 +0000 UTC m=+1438.773578017" observedRunningTime="2026-03-12 16:26:30.951931611 +0000 UTC m=+1439.915893965" watchObservedRunningTime="2026-03-12 16:26:30.971298741 +0000 UTC m=+1439.935261115" Mar 12 16:26:31 crc kubenswrapper[4687]: I0312 16:26:31.950647 4687 generic.go:334] "Generic (PLEG): container finished" podID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerID="4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516" exitCode=0 Mar 12 16:26:31 crc kubenswrapper[4687]: I0312 16:26:31.950921 4687 generic.go:334] "Generic (PLEG): container finished" podID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerID="58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8" exitCode=2 Mar 12 16:26:31 crc kubenswrapper[4687]: I0312 16:26:31.950931 4687 generic.go:334] "Generic (PLEG): container finished" podID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" 
containerID="7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382" exitCode=0 Mar 12 16:26:31 crc kubenswrapper[4687]: I0312 16:26:31.950837 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerDied","Data":"4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516"} Mar 12 16:26:31 crc kubenswrapper[4687]: I0312 16:26:31.950966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerDied","Data":"58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8"} Mar 12 16:26:31 crc kubenswrapper[4687]: I0312 16:26:31.950980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerDied","Data":"7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382"} Mar 12 16:26:32 crc kubenswrapper[4687]: I0312 16:26:32.766830 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:26:32 crc kubenswrapper[4687]: I0312 16:26:32.845280 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-67996fb69c-bczjq"] Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.135975 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.196981 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7764695775-jnw2l"] Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.308793 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.481097 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data-custom\") pod \"a01a9875-3fca-4c27-81e1-e052629b04ea\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.481285 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data\") pod \"a01a9875-3fca-4c27-81e1-e052629b04ea\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.481476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-combined-ca-bundle\") pod \"a01a9875-3fca-4c27-81e1-e052629b04ea\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.481513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qdh\" (UniqueName: \"kubernetes.io/projected/a01a9875-3fca-4c27-81e1-e052629b04ea-kube-api-access-r8qdh\") pod \"a01a9875-3fca-4c27-81e1-e052629b04ea\" (UID: \"a01a9875-3fca-4c27-81e1-e052629b04ea\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.490981 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a01a9875-3fca-4c27-81e1-e052629b04ea" (UID: "a01a9875-3fca-4c27-81e1-e052629b04ea"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.491212 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01a9875-3fca-4c27-81e1-e052629b04ea-kube-api-access-r8qdh" (OuterVolumeSpecName: "kube-api-access-r8qdh") pod "a01a9875-3fca-4c27-81e1-e052629b04ea" (UID: "a01a9875-3fca-4c27-81e1-e052629b04ea"). InnerVolumeSpecName "kube-api-access-r8qdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.537162 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a01a9875-3fca-4c27-81e1-e052629b04ea" (UID: "a01a9875-3fca-4c27-81e1-e052629b04ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.581462 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data" (OuterVolumeSpecName: "config-data") pod "a01a9875-3fca-4c27-81e1-e052629b04ea" (UID: "a01a9875-3fca-4c27-81e1-e052629b04ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.584245 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.584278 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.584291 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01a9875-3fca-4c27-81e1-e052629b04ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.584300 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8qdh\" (UniqueName: \"kubernetes.io/projected/a01a9875-3fca-4c27-81e1-e052629b04ea-kube-api-access-r8qdh\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.675682 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.770234 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m57l8"] Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.771299 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="extract-utilities" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.771447 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="extract-utilities" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.771582 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d64643-db52-410e-a678-980d5030356e" containerName="heat-api" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.771729 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d64643-db52-410e-a678-980d5030356e" containerName="heat-api" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.771843 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="extract-content" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.778465 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="extract-content" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.778630 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8959ad5d-c828-4dcf-993f-4225e02fa8ff" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.778685 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8959ad5d-c828-4dcf-993f-4225e02fa8ff" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.778765 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652d0f48-b79d-4be2-a971-735297c4c3d6" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.778813 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="652d0f48-b79d-4be2-a971-735297c4c3d6" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc 
kubenswrapper[4687]: E0312 16:26:33.778875 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="831bdc9a-ebd0-4559-8f76-4f590cd3ea4f" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.778922 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="831bdc9a-ebd0-4559-8f76-4f590cd3ea4f" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.778978 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3221bf1a-072f-43e5-add9-4b21a6145692" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.779024 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3221bf1a-072f-43e5-add9-4b21a6145692" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.779079 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b16131f-3ac7-4721-ae74-8f7c937a3fec" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.779125 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b16131f-3ac7-4721-ae74-8f7c937a3fec" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.779179 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e2c288-c849-482c-97e9-671307534961" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.779226 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e2c288-c849-482c-97e9-671307534961" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.779282 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerName="heat-cfnapi" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.779330 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerName="heat-cfnapi" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.779419 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.779488 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" Mar 12 16:26:33 crc kubenswrapper[4687]: E0312 16:26:33.779658 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerName="heat-cfnapi" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.779715 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerName="heat-cfnapi" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.780165 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerName="heat-cfnapi" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.780760 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="831bdc9a-ebd0-4559-8f76-4f590cd3ea4f" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.780841 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e2c288-c849-482c-97e9-671307534961" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.780908 4687 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="03d64643-db52-410e-a678-980d5030356e" containerName="heat-api" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.780962 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d64643-db52-410e-a678-980d5030356e" containerName="heat-api" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.781019 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" containerName="heat-cfnapi" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.781073 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51b91d1-2083-4aa4-8e9a-0f80ad1413eb" containerName="registry-server" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.781129 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8959ad5d-c828-4dcf-993f-4225e02fa8ff" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.781186 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b16131f-3ac7-4721-ae74-8f7c937a3fec" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.781240 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3221bf1a-072f-43e5-add9-4b21a6145692" containerName="mariadb-database-create" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.781312 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="652d0f48-b79d-4be2-a971-735297c4c3d6" containerName="mariadb-account-create-update" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.782221 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.788065 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9q888" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.788296 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.788441 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.789053 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data\") pod \"03d64643-db52-410e-a678-980d5030356e\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.789156 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-combined-ca-bundle\") pod \"03d64643-db52-410e-a678-980d5030356e\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.789363 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k5mt\" (UniqueName: \"kubernetes.io/projected/03d64643-db52-410e-a678-980d5030356e-kube-api-access-2k5mt\") pod \"03d64643-db52-410e-a678-980d5030356e\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.789560 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data-custom\") pod \"03d64643-db52-410e-a678-980d5030356e\" (UID: \"03d64643-db52-410e-a678-980d5030356e\") " Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.801685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "03d64643-db52-410e-a678-980d5030356e" (UID: "03d64643-db52-410e-a678-980d5030356e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.802473 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d64643-db52-410e-a678-980d5030356e-kube-api-access-2k5mt" (OuterVolumeSpecName: "kube-api-access-2k5mt") pod "03d64643-db52-410e-a678-980d5030356e" (UID: "03d64643-db52-410e-a678-980d5030356e"). InnerVolumeSpecName "kube-api-access-2k5mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.803663 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m57l8"] Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.836141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03d64643-db52-410e-a678-980d5030356e" (UID: "03d64643-db52-410e-a678-980d5030356e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.868199 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data" (OuterVolumeSpecName: "config-data") pod "03d64643-db52-410e-a678-980d5030356e" (UID: "03d64643-db52-410e-a678-980d5030356e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.895020 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.896718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4sj8\" (UniqueName: \"kubernetes.io/projected/aa080a8e-b5e2-43f7-8d94-ea07058748b6-kube-api-access-v4sj8\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.897100 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-config-data\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.898868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-scripts\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.899042 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k5mt\" (UniqueName: \"kubernetes.io/projected/03d64643-db52-410e-a678-980d5030356e-kube-api-access-2k5mt\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.899061 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.899072 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.899081 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d64643-db52-410e-a678-980d5030356e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.972879 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7764695775-jnw2l" event={"ID":"03d64643-db52-410e-a678-980d5030356e","Type":"ContainerDied","Data":"c4c6f7c0d08e95d4a9842ef6c6b2ae5409f08cbffa7654fea329b4373bfea989"} Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.972934 4687 scope.go:117] "RemoveContainer" containerID="f842437dd55895f4976c651b46e1b6f8e49afd78db4b97ca471ba2383170eb8a" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.973804 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7764695775-jnw2l" Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.976801 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67996fb69c-bczjq" event={"ID":"a01a9875-3fca-4c27-81e1-e052629b04ea","Type":"ContainerDied","Data":"bfa15183eefdb3bf438d23e54b9d7383a7a0953094d2d6c487e32fb3add4a9cc"} Mar 12 16:26:33 crc kubenswrapper[4687]: I0312 16:26:33.976831 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67996fb69c-bczjq" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.001230 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-config-data\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.001459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-scripts\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.001522 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.001573 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4sj8\" (UniqueName: \"kubernetes.io/projected/aa080a8e-b5e2-43f7-8d94-ea07058748b6-kube-api-access-v4sj8\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.005479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-config-data\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.005732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-scripts\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.006376 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.015528 4687 scope.go:117] "RemoveContainer" containerID="f4549430e80ae6379b6e8342e0046eab88c9d2c4a2f805de42d27adfb83a7432" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 
16:26:34.015667 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-67996fb69c-bczjq"] Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.028468 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4sj8\" (UniqueName: \"kubernetes.io/projected/aa080a8e-b5e2-43f7-8d94-ea07058748b6-kube-api-access-v4sj8\") pod \"nova-cell0-conductor-db-sync-m57l8\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.034734 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-67996fb69c-bczjq"] Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.050147 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7764695775-jnw2l"] Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.061231 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7764695775-jnw2l"] Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.116817 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.366775 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.368099 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.410410 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.422301 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:34 crc kubenswrapper[4687]: W0312 16:26:34.603993 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa080a8e_b5e2_43f7_8d94_ea07058748b6.slice/crio-763e5f3adbab01fec673c7a82e9a76ddbfe736a4067eb3b1d6faf490ea0f05da WatchSource:0}: Error finding container 763e5f3adbab01fec673c7a82e9a76ddbfe736a4067eb3b1d6faf490ea0f05da: Status 404 returned error can't find the container with id 763e5f3adbab01fec673c7a82e9a76ddbfe736a4067eb3b1d6faf490ea0f05da Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.605323 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m57l8"] Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.820511 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.894249 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-66ff7fbdf4-jqph2"] Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.894460 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-66ff7fbdf4-jqph2" podUID="0695d270-8d8d-4148-8782-8d14fb110c88" containerName="heat-engine" containerID="cri-o://152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67" gracePeriod=60 Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.987725 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m57l8" 
event={"ID":"aa080a8e-b5e2-43f7-8d94-ea07058748b6","Type":"ContainerStarted","Data":"763e5f3adbab01fec673c7a82e9a76ddbfe736a4067eb3b1d6faf490ea0f05da"} Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.991187 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:34 crc kubenswrapper[4687]: I0312 16:26:34.991221 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:35 crc kubenswrapper[4687]: I0312 16:26:35.749915 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d64643-db52-410e-a678-980d5030356e" path="/var/lib/kubelet/pods/03d64643-db52-410e-a678-980d5030356e/volumes" Mar 12 16:26:35 crc kubenswrapper[4687]: I0312 16:26:35.750868 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01a9875-3fca-4c27-81e1-e052629b04ea" path="/var/lib/kubelet/pods/a01a9875-3fca-4c27-81e1-e052629b04ea/volumes" Mar 12 16:26:35 crc kubenswrapper[4687]: I0312 16:26:35.941563 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.009769 4687 generic.go:334] "Generic (PLEG): container finished" podID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerID="e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658" exitCode=0 Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.009863 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerDied","Data":"e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658"} Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.009949 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4e40c13-c09a-4b3b-88fa-27a3b402dacd","Type":"ContainerDied","Data":"1bd10ee2e88dac69ef36259d10fe91f4a31ed83e0f64ade683d9a0a9b54b3787"} Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.009967 4687 scope.go:117] "RemoveContainer" containerID="4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.009889 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.044337 4687 scope.go:117] "RemoveContainer" containerID="58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.049450 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-run-httpd\") pod \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.049514 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-scripts\") pod \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.049573 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql9cq\" (UniqueName: \"kubernetes.io/projected/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-kube-api-access-ql9cq\") pod \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.049610 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-config-data\") pod \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.049643 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-log-httpd\") pod \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.049729 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-combined-ca-bundle\") pod \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.049799 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-sg-core-conf-yaml\") pod \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\" (UID: \"c4e40c13-c09a-4b3b-88fa-27a3b402dacd\") " Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.050630 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c4e40c13-c09a-4b3b-88fa-27a3b402dacd" (UID: "c4e40c13-c09a-4b3b-88fa-27a3b402dacd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.050644 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c4e40c13-c09a-4b3b-88fa-27a3b402dacd" (UID: "c4e40c13-c09a-4b3b-88fa-27a3b402dacd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.051309 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.051336 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.074907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-kube-api-access-ql9cq" (OuterVolumeSpecName: "kube-api-access-ql9cq") pod "c4e40c13-c09a-4b3b-88fa-27a3b402dacd" (UID: "c4e40c13-c09a-4b3b-88fa-27a3b402dacd"). InnerVolumeSpecName "kube-api-access-ql9cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.077973 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-scripts" (OuterVolumeSpecName: "scripts") pod "c4e40c13-c09a-4b3b-88fa-27a3b402dacd" (UID: "c4e40c13-c09a-4b3b-88fa-27a3b402dacd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.092583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c4e40c13-c09a-4b3b-88fa-27a3b402dacd" (UID: "c4e40c13-c09a-4b3b-88fa-27a3b402dacd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.130438 4687 scope.go:117] "RemoveContainer" containerID="7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.148475 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4e40c13-c09a-4b3b-88fa-27a3b402dacd" (UID: "c4e40c13-c09a-4b3b-88fa-27a3b402dacd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.155024 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.155179 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql9cq\" (UniqueName: \"kubernetes.io/projected/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-kube-api-access-ql9cq\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.155198 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.155265 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.228572 4687 scope.go:117] "RemoveContainer" containerID="e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.232565 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-config-data" (OuterVolumeSpecName: "config-data") pod "c4e40c13-c09a-4b3b-88fa-27a3b402dacd" (UID: "c4e40c13-c09a-4b3b-88fa-27a3b402dacd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.257450 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4e40c13-c09a-4b3b-88fa-27a3b402dacd-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.268613 4687 scope.go:117] "RemoveContainer" containerID="4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.272485 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516\": container with ID starting with 4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516 not found: ID does not exist" containerID="4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.272526 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516"} err="failed to get container status \"4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516\": rpc error: code = NotFound desc = could not find container \"4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516\": container with ID starting with 4e7afea4f999a58757eaf4cd15447407b844176ac3607da10f62a2a6d9f0c516 not found: ID does not exist" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.272554 4687 scope.go:117] "RemoveContainer" containerID="58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.272917 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8\": container with ID starting with 58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8 not found: ID does not exist" containerID="58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.272959 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8"} err="failed to get container status \"58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8\": rpc error: code = NotFound desc = could not find container \"58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8\": container with ID starting with 58608dd2dca94e2b157f590efdf1e4a39411ad3cedda688f0849fe5d6e6c39a8 not found: ID does not exist" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.272987 4687 scope.go:117] "RemoveContainer" containerID="7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.273294 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382\": container with ID starting with 7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382 not found: ID does not exist" containerID="7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.273314 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382"} err="failed to get container status \"7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382\": rpc error: code = NotFound desc = could not find container \"7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382\": container with ID starting with 7e9523ac0b04415dc18aba73e06e1c1398cc064b018d73603aac654f5d9ff382 not found: ID does not exist" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.273341 4687 scope.go:117] "RemoveContainer" containerID="e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.273781 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658\": container with ID starting with e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658 not found: ID does not exist" containerID="e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.273805 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658"} err="failed to get container status \"e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658\": rpc error: code = NotFound desc = could not find container \"e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658\": container with ID starting with e386e863cbce7ddf1e6f8d341917005ae1655123b686c4756b5705a41f663658 not found: ID does not exist" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.344487 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:36 
crc kubenswrapper[4687]: I0312 16:26:36.353652 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.384466 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.385283 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-notification-agent" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.385376 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-notification-agent" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.385448 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-central-agent" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.385497 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-central-agent" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.385574 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="proxy-httpd" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.385625 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="proxy-httpd" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.385685 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="sg-core" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.385732 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="sg-core" Mar 12 16:26:36 crc kubenswrapper[4687]: E0312 16:26:36.385795 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d64643-db52-410e-a678-980d5030356e" containerName="heat-api" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.385842 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d64643-db52-410e-a678-980d5030356e" containerName="heat-api" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.386080 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="proxy-httpd" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.386153 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-central-agent" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.386218 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="sg-core" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.386268 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" containerName="ceilometer-notification-agent" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.388361 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.391265 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.391502 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.399875 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.463590 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-log-httpd\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.463643 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.463694 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb9tw\" (UniqueName: \"kubernetes.io/projected/64ea3493-5328-4b96-8790-9f7fe021de93-kube-api-access-nb9tw\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.463771 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-scripts\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.463843 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-run-httpd\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.463875 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.463894 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-config-data\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.565612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-log-httpd\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.565680 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.565751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb9tw\" (UniqueName: \"kubernetes.io/projected/64ea3493-5328-4b96-8790-9f7fe021de93-kube-api-access-nb9tw\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.565879 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-scripts\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.565997 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-run-httpd\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.566049 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.566078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-config-data\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.566872 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-log-httpd\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.567309 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-run-httpd\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.572046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.572281 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-scripts\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.573419 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-config-data\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.580243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.595275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb9tw\" (UniqueName: \"kubernetes.io/projected/64ea3493-5328-4b96-8790-9f7fe021de93-kube-api-access-nb9tw\") pod \"ceilometer-0\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " pod="openstack/ceilometer-0" Mar 12 16:26:36 crc kubenswrapper[4687]: I0312 16:26:36.709612 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:37 crc kubenswrapper[4687]: E0312 16:26:37.089531 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:26:37 crc kubenswrapper[4687]: E0312 16:26:37.092389 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:26:37 crc kubenswrapper[4687]: E0312 16:26:37.094665 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:26:37 crc kubenswrapper[4687]: E0312 16:26:37.094704 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-66ff7fbdf4-jqph2" podUID="0695d270-8d8d-4148-8782-8d14fb110c88" containerName="heat-engine" Mar 12 16:26:37 crc kubenswrapper[4687]: I0312 16:26:37.262126 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:37 crc kubenswrapper[4687]: I0312 16:26:37.753790 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e40c13-c09a-4b3b-88fa-27a3b402dacd" path="/var/lib/kubelet/pods/c4e40c13-c09a-4b3b-88fa-27a3b402dacd/volumes" Mar 12 16:26:38 crc kubenswrapper[4687]: I0312 16:26:38.057021 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerStarted","Data":"295072fe28280d763c05cfbb6dfeb577ed17b8cdb1658239b366e70449b879ea"} Mar 12 16:26:38 crc kubenswrapper[4687]: I0312 16:26:38.422779 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:38 crc 
kubenswrapper[4687]: I0312 16:26:38.423124 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 16:26:38 crc kubenswrapper[4687]: I0312 16:26:38.430162 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 16:26:39 crc kubenswrapper[4687]: I0312 16:26:39.080108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerStarted","Data":"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25"} Mar 12 16:26:39 crc kubenswrapper[4687]: I0312 16:26:39.080782 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerStarted","Data":"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c"} Mar 12 16:26:43 crc kubenswrapper[4687]: I0312 16:26:43.140781 4687 generic.go:334] "Generic (PLEG): container finished" podID="0695d270-8d8d-4148-8782-8d14fb110c88" containerID="152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67" exitCode=0 Mar 12 16:26:43 crc kubenswrapper[4687]: I0312 16:26:43.140858 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-66ff7fbdf4-jqph2" event={"ID":"0695d270-8d8d-4148-8782-8d14fb110c88","Type":"ContainerDied","Data":"152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67"} Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.051316 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.094600 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data-custom\") pod \"0695d270-8d8d-4148-8782-8d14fb110c88\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.094806 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5mx8\" (UniqueName: \"kubernetes.io/projected/0695d270-8d8d-4148-8782-8d14fb110c88-kube-api-access-j5mx8\") pod \"0695d270-8d8d-4148-8782-8d14fb110c88\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.094836 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-combined-ca-bundle\") pod \"0695d270-8d8d-4148-8782-8d14fb110c88\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.094971 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data\") pod \"0695d270-8d8d-4148-8782-8d14fb110c88\" (UID: \"0695d270-8d8d-4148-8782-8d14fb110c88\") " Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.101459 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0695d270-8d8d-4148-8782-8d14fb110c88-kube-api-access-j5mx8" (OuterVolumeSpecName: "kube-api-access-j5mx8") pod "0695d270-8d8d-4148-8782-8d14fb110c88" (UID: "0695d270-8d8d-4148-8782-8d14fb110c88"). InnerVolumeSpecName "kube-api-access-j5mx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.101702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0695d270-8d8d-4148-8782-8d14fb110c88" (UID: "0695d270-8d8d-4148-8782-8d14fb110c88"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.136197 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0695d270-8d8d-4148-8782-8d14fb110c88" (UID: "0695d270-8d8d-4148-8782-8d14fb110c88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.163203 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data" (OuterVolumeSpecName: "config-data") pod "0695d270-8d8d-4148-8782-8d14fb110c88" (UID: "0695d270-8d8d-4148-8782-8d14fb110c88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.166005 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m57l8" event={"ID":"aa080a8e-b5e2-43f7-8d94-ea07058748b6","Type":"ContainerStarted","Data":"7a8e416b7bebb68d9da01b9f17f5a04b574fb03f44424745e4c1bde126eff216"} Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.168559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-66ff7fbdf4-jqph2" event={"ID":"0695d270-8d8d-4148-8782-8d14fb110c88","Type":"ContainerDied","Data":"eb3e0d6270f393c9d2aff581e200eb7ac1c97f979aff1e4d8d387bcd89ce5563"} Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.168595 4687 scope.go:117] "RemoveContainer" containerID="152215689f700fde8ae3e96b9f6eed16cffaa20bc0c1a8034c46bb3b73557e67" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.168702 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-66ff7fbdf4-jqph2" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.179121 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerStarted","Data":"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6"} Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.196857 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-m57l8" podStartSLOduration=2.034880949 podStartE2EDuration="12.196839655s" podCreationTimestamp="2026-03-12 16:26:33 +0000 UTC" firstStartedPulling="2026-03-12 16:26:34.60619594 +0000 UTC m=+1443.570158284" lastFinishedPulling="2026-03-12 16:26:44.768154646 +0000 UTC m=+1453.732116990" observedRunningTime="2026-03-12 16:26:45.193588178 +0000 UTC m=+1454.157550532" watchObservedRunningTime="2026-03-12 16:26:45.196839655 +0000 UTC m=+1454.160802009" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.197504 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.197528 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.197559 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5mx8\" (UniqueName: \"kubernetes.io/projected/0695d270-8d8d-4148-8782-8d14fb110c88-kube-api-access-j5mx8\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.197569 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0695d270-8d8d-4148-8782-8d14fb110c88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.219352 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-66ff7fbdf4-jqph2"] Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.228479 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-66ff7fbdf4-jqph2"] Mar 12 16:26:45 crc kubenswrapper[4687]: I0312 16:26:45.747545 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0695d270-8d8d-4148-8782-8d14fb110c88" path="/var/lib/kubelet/pods/0695d270-8d8d-4148-8782-8d14fb110c88/volumes" Mar 12 16:26:48 crc kubenswrapper[4687]: I0312 16:26:48.213750 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerStarted","Data":"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c"} Mar 12 16:26:48 crc kubenswrapper[4687]: I0312 16:26:48.215639 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:26:48 crc kubenswrapper[4687]: I0312 16:26:48.251089 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5432088 podStartE2EDuration="12.251066615s" podCreationTimestamp="2026-03-12 16:26:36 +0000 UTC" firstStartedPulling="2026-03-12 16:26:37.277865948 +0000 UTC m=+1446.241828302" lastFinishedPulling="2026-03-12 16:26:46.985723773 +0000 UTC 
m=+1455.949686117" observedRunningTime="2026-03-12 16:26:48.24085743 +0000 UTC m=+1457.204819774" watchObservedRunningTime="2026-03-12 16:26:48.251066615 +0000 UTC m=+1457.215028979" Mar 12 16:26:51 crc kubenswrapper[4687]: I0312 16:26:51.246691 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:51 crc kubenswrapper[4687]: I0312 16:26:51.247549 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="ceilometer-central-agent" containerID="cri-o://a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" gracePeriod=30 Mar 12 16:26:51 crc kubenswrapper[4687]: I0312 16:26:51.247716 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="proxy-httpd" containerID="cri-o://e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" gracePeriod=30 Mar 12 16:26:51 crc kubenswrapper[4687]: I0312 16:26:51.247822 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="sg-core" containerID="cri-o://86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" gracePeriod=30 Mar 12 16:26:51 crc kubenswrapper[4687]: I0312 16:26:51.247898 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="ceilometer-notification-agent" containerID="cri-o://6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" gracePeriod=30 Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.162459 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.258945 4687 generic.go:334] "Generic (PLEG): container finished" podID="64ea3493-5328-4b96-8790-9f7fe021de93" containerID="e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" exitCode=0 Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.258974 4687 generic.go:334] "Generic (PLEG): container finished" podID="64ea3493-5328-4b96-8790-9f7fe021de93" containerID="86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" exitCode=2 Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.258981 4687 generic.go:334] "Generic (PLEG): container finished" podID="64ea3493-5328-4b96-8790-9f7fe021de93" containerID="6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" exitCode=0 Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.258989 4687 generic.go:334] "Generic (PLEG): container finished" podID="64ea3493-5328-4b96-8790-9f7fe021de93" containerID="a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" exitCode=0 Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.259003 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.259007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerDied","Data":"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c"} Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.259111 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerDied","Data":"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6"} Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.259123 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerDied","Data":"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25"} Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.259131 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerDied","Data":"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c"} Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.259139 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64ea3493-5328-4b96-8790-9f7fe021de93","Type":"ContainerDied","Data":"295072fe28280d763c05cfbb6dfeb577ed17b8cdb1658239b366e70449b879ea"} Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.259153 4687 scope.go:117] "RemoveContainer" containerID="e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.268838 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-config-data\") pod \"64ea3493-5328-4b96-8790-9f7fe021de93\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.268935 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-sg-core-conf-yaml\") pod \"64ea3493-5328-4b96-8790-9f7fe021de93\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.269005 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-scripts\") pod \"64ea3493-5328-4b96-8790-9f7fe021de93\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.269043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-log-httpd\") pod \"64ea3493-5328-4b96-8790-9f7fe021de93\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.269069 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-run-httpd\") pod \"64ea3493-5328-4b96-8790-9f7fe021de93\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.269114 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nb9tw\" (UniqueName: \"kubernetes.io/projected/64ea3493-5328-4b96-8790-9f7fe021de93-kube-api-access-nb9tw\") pod \"64ea3493-5328-4b96-8790-9f7fe021de93\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.269203 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-combined-ca-bundle\") pod \"64ea3493-5328-4b96-8790-9f7fe021de93\" (UID: \"64ea3493-5328-4b96-8790-9f7fe021de93\") " Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.270054 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "64ea3493-5328-4b96-8790-9f7fe021de93" (UID: "64ea3493-5328-4b96-8790-9f7fe021de93"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.274394 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ea3493-5328-4b96-8790-9f7fe021de93-kube-api-access-nb9tw" (OuterVolumeSpecName: "kube-api-access-nb9tw") pod "64ea3493-5328-4b96-8790-9f7fe021de93" (UID: "64ea3493-5328-4b96-8790-9f7fe021de93"). InnerVolumeSpecName "kube-api-access-nb9tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.274524 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "64ea3493-5328-4b96-8790-9f7fe021de93" (UID: "64ea3493-5328-4b96-8790-9f7fe021de93"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.274601 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-scripts" (OuterVolumeSpecName: "scripts") pod "64ea3493-5328-4b96-8790-9f7fe021de93" (UID: "64ea3493-5328-4b96-8790-9f7fe021de93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.284699 4687 scope.go:117] "RemoveContainer" containerID="86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.310327 4687 scope.go:117] "RemoveContainer" containerID="6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.310893 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "64ea3493-5328-4b96-8790-9f7fe021de93" (UID: "64ea3493-5328-4b96-8790-9f7fe021de93"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.334160 4687 scope.go:117] "RemoveContainer" containerID="a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.355056 4687 scope.go:117] "RemoveContainer" containerID="e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.355547 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": container with ID starting with e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c not found: ID does not exist" containerID="e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.355562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64ea3493-5328-4b96-8790-9f7fe021de93" (UID: "64ea3493-5328-4b96-8790-9f7fe021de93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.355613 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c"} err="failed to get container status \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": rpc error: code = NotFound desc = could not find container \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": container with ID starting with e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.355640 4687 scope.go:117] "RemoveContainer" containerID="86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.356021 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": container with ID starting with 86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6 not found: ID does not exist" containerID="86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.356051 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6"} err="failed to get container status \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": rpc error: code = NotFound desc = could not find container \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": container with ID starting with 86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6 not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.356073 4687 scope.go:117] "RemoveContainer" containerID="6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.356339 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": container with ID starting with 6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25 not found: ID does not exist" containerID="6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.356415 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25"} err="failed to get container status \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": rpc error: code = NotFound desc = could not find container \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": container with ID starting with 6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25 not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.356447 4687 scope.go:117] "RemoveContainer" containerID="a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.356745 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": container with ID starting with a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c not found: ID does not exist" containerID="a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.356772 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c"} err="failed to get container status \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": rpc error: code = NotFound desc = could not find container \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": container with ID starting with a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.356785 4687 scope.go:117] "RemoveContainer" containerID="e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.357168 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c"} err="failed to get container status \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": rpc error: code = NotFound desc = could not find container \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": container with ID starting with e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.357192 4687 scope.go:117] "RemoveContainer" containerID="86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.357430 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6"} err="failed to get container status \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": rpc error: code = NotFound desc = could not find container \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": container with ID starting with 
86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6 not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.357452 4687 scope.go:117] "RemoveContainer" containerID="6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.357688 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25"} err="failed to get container status \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": rpc error: code = NotFound desc = could not find container \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": container with ID starting with 6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25 not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.357709 4687 scope.go:117] "RemoveContainer" containerID="a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.358014 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c"} err="failed to get container status \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": rpc error: code = NotFound desc = could not find container \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": container with ID starting with a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.358035 4687 scope.go:117] "RemoveContainer" containerID="e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.358260 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c"} err="failed to get container status \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": rpc error: code = NotFound desc = could not find container \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": container with ID starting with e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.358291 4687 scope.go:117] "RemoveContainer" containerID="86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.358628 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6"} err="failed to get container status \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": rpc error: code = NotFound desc = could not find container \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": container with ID starting with 86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6 not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.358666 4687 scope.go:117] "RemoveContainer" containerID="6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359012 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25"} err="failed to get container status \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": rpc error: code = NotFound desc = could not find container \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": container with ID starting with 6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25 not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359033 4687 scope.go:117] "RemoveContainer" containerID="a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359368 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c"} err="failed to get container status \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": rpc error: code = NotFound desc = could not find container \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": container with ID starting with a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359387 4687 scope.go:117] "RemoveContainer" containerID="e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359576 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c"} err="failed to get container status \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": rpc error: code = NotFound desc = could not find container \"e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c\": container with ID starting with e0a6ba7957b9b6d577f6869f51133b7ab18902727ebfd795d8f6574d5f3f853c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359596 4687 scope.go:117] "RemoveContainer" containerID="86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359811 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6"} err="failed to get container status \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": rpc error: code = NotFound desc = could not find container \"86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6\": container with ID starting with 86a0788d97a8bb0c93d250a1b87a4f5911f741e4f8c90f4a1fcf08e15c9fb2b6 not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.359829 4687 scope.go:117] "RemoveContainer" containerID="6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.360071 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25"} err="failed to get container status \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": rpc error: code = NotFound desc = could not find container \"6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25\": container with ID starting with 6fa1a3caf34199a7f303f48a74a2150e6e81bffa6c9702cba10b918359deda25 not found: ID does not exist" Mar 
12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.360099 4687 scope.go:117] "RemoveContainer" containerID="a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.360325 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c"} err="failed to get container status \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": rpc error: code = NotFound desc = could not find container \"a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c\": container with ID starting with a0772742c7aee3579c1cb6024b770a56c1df5e4ad3068cc1047485d37f7b004c not found: ID does not exist" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.372120 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.372144 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.372154 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64ea3493-5328-4b96-8790-9f7fe021de93-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.372164 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb9tw\" (UniqueName: \"kubernetes.io/projected/64ea3493-5328-4b96-8790-9f7fe021de93-kube-api-access-nb9tw\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.372175 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.372183 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.408533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-config-data" (OuterVolumeSpecName: "config-data") pod "64ea3493-5328-4b96-8790-9f7fe021de93" (UID: "64ea3493-5328-4b96-8790-9f7fe021de93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.473681 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64ea3493-5328-4b96-8790-9f7fe021de93-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.598447 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.609806 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.643819 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.644656 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="sg-core" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.644763 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="sg-core" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.644835 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="proxy-httpd" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.644900 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="proxy-httpd" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.644993 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="ceilometer-notification-agent" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.645096 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="ceilometer-notification-agent" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.645181 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="ceilometer-central-agent" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.645272 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="ceilometer-central-agent" Mar 12 16:26:52 crc kubenswrapper[4687]: E0312 16:26:52.645491 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0695d270-8d8d-4148-8782-8d14fb110c88" containerName="heat-engine" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.645556 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0695d270-8d8d-4148-8782-8d14fb110c88" containerName="heat-engine" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.645823 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="ceilometer-notification-agent" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.645909 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="proxy-httpd" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.645990 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0695d270-8d8d-4148-8782-8d14fb110c88" containerName="heat-engine" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.646211 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" 
containerName="ceilometer-central-agent" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.646285 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" containerName="sg-core" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.648346 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.652260 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.655268 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.657881 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.779262 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-run-httpd\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.779410 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.779452 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-config-data\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.779607 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.779807 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-scripts\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.779962 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9vlm\" (UniqueName: \"kubernetes.io/projected/3f83cfb5-7cf9-4547-815d-995b920102ab-kube-api-access-c9vlm\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.780003 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-log-httpd\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.882678 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.883078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-scripts\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.883240 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9vlm\" (UniqueName: \"kubernetes.io/projected/3f83cfb5-7cf9-4547-815d-995b920102ab-kube-api-access-c9vlm\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.883382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-log-httpd\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.883609 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-run-httpd\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.883799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.883930 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-log-httpd\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.884023 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-run-httpd\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.884136 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-config-data\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.887329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.887977 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-scripts\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.892708 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-config-data\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.899215 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.905780 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9vlm\" (UniqueName: \"kubernetes.io/projected/3f83cfb5-7cf9-4547-815d-995b920102ab-kube-api-access-c9vlm\") pod \"ceilometer-0\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " pod="openstack/ceilometer-0" Mar 12 16:26:52 crc kubenswrapper[4687]: I0312 16:26:52.974072 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:26:53 crc kubenswrapper[4687]: I0312 16:26:53.436999 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:53 crc kubenswrapper[4687]: I0312 16:26:53.749174 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ea3493-5328-4b96-8790-9f7fe021de93" path="/var/lib/kubelet/pods/64ea3493-5328-4b96-8790-9f7fe021de93/volumes" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.093953 4687 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb6d81f6b-475e-4a5b-9fd8-006856dd645d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb6d81f6b-475e-4a5b-9fd8-006856dd645d] : Timed out while waiting for systemd to remove kubepods-besteffort-podb6d81f6b_475e_4a5b_9fd8_006856dd645d.slice" Mar 12 16:26:54 crc kubenswrapper[4687]: E0312 16:26:54.094230 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb6d81f6b-475e-4a5b-9fd8-006856dd645d] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb6d81f6b-475e-4a5b-9fd8-006856dd645d] : Timed out while waiting for systemd to remove kubepods-besteffort-podb6d81f6b_475e_4a5b_9fd8_006856dd645d.slice" pod="openstack/glance-default-external-api-0" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.283176 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerStarted","Data":"e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8"} Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.283242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerStarted","Data":"4d4b1a1ad5951dd54d2ddd42d9d21bfa17ce96759b27bde41872e325f521edeb"} Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.283203 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.330423 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.347737 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.379165 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.388200 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.391534 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.394832 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.395972 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.540218 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7ln4\" (UniqueName: \"kubernetes.io/projected/6691ec24-3499-48c7-85f2-1f4ea3327d55-kube-api-access-b7ln4\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.540449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6691ec24-3499-48c7-85f2-1f4ea3327d55-logs\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.540575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-scripts\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.540669 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.541019 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6691ec24-3499-48c7-85f2-1f4ea3327d55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.541214 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.541242 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-config-data\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.541382 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.643728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7ln4\" (UniqueName: \"kubernetes.io/projected/6691ec24-3499-48c7-85f2-1f4ea3327d55-kube-api-access-b7ln4\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.643828 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6691ec24-3499-48c7-85f2-1f4ea3327d55-logs\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.643874 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-scripts\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.643913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.644008 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6691ec24-3499-48c7-85f2-1f4ea3327d55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.644071 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.644094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-config-data\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.644160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.645618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6691ec24-3499-48c7-85f2-1f4ea3327d55-logs\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.645839 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6691ec24-3499-48c7-85f2-1f4ea3327d55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.657281 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.665624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-config-data\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.671243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-scripts\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.671913 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6691ec24-3499-48c7-85f2-1f4ea3327d55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.681165 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7ln4\" (UniqueName: \"kubernetes.io/projected/6691ec24-3499-48c7-85f2-1f4ea3327d55-kube-api-access-b7ln4\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.713286 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.713328 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b302cf2d10698cf3ec9c6d22e773973781febffa91dff6b5e0b4b9a361287f8c/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 12 16:26:54 crc kubenswrapper[4687]: I0312 16:26:54.937866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e2c036d8-1eb8-4b75-8681-595106dcacc9\") pod \"glance-default-external-api-0\" (UID: \"6691ec24-3499-48c7-85f2-1f4ea3327d55\") " pod="openstack/glance-default-external-api-0" Mar 12 16:26:55 crc kubenswrapper[4687]: I0312 16:26:55.044155 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 16:26:55 crc kubenswrapper[4687]: I0312 16:26:55.300965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerStarted","Data":"6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372"} Mar 12 16:26:55 crc kubenswrapper[4687]: I0312 16:26:55.661014 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 16:26:55 crc kubenswrapper[4687]: I0312 16:26:55.760786 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d81f6b-475e-4a5b-9fd8-006856dd645d" path="/var/lib/kubelet/pods/b6d81f6b-475e-4a5b-9fd8-006856dd645d/volumes" Mar 12 16:26:56 crc kubenswrapper[4687]: I0312 16:26:56.319864 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6691ec24-3499-48c7-85f2-1f4ea3327d55","Type":"ContainerStarted","Data":"a777ac0738c3acebed364b7ce97145bbd198ca94fb7f1e12ce5534b622fe72b0"} Mar 12 16:26:57 crc kubenswrapper[4687]: I0312 16:26:57.335764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerStarted","Data":"5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258"} Mar 12 16:26:57 crc kubenswrapper[4687]: I0312 16:26:57.340023 4687 generic.go:334] "Generic (PLEG): container finished" podID="aa080a8e-b5e2-43f7-8d94-ea07058748b6" containerID="7a8e416b7bebb68d9da01b9f17f5a04b574fb03f44424745e4c1bde126eff216" exitCode=0 Mar 12 16:26:57 crc kubenswrapper[4687]: I0312 16:26:57.340085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m57l8" event={"ID":"aa080a8e-b5e2-43f7-8d94-ea07058748b6","Type":"ContainerDied","Data":"7a8e416b7bebb68d9da01b9f17f5a04b574fb03f44424745e4c1bde126eff216"} Mar 12 16:26:57 crc kubenswrapper[4687]: I0312 16:26:57.345389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6691ec24-3499-48c7-85f2-1f4ea3327d55","Type":"ContainerStarted","Data":"c0e74b2e054aecebcd454b3c2a48c1852b1f2b8f565327f0c0551b92afc82f89"} Mar 12 16:26:57 crc kubenswrapper[4687]: I0312 16:26:57.345419 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"6691ec24-3499-48c7-85f2-1f4ea3327d55","Type":"ContainerStarted","Data":"49207bcbb0cc69589f4225ec85d6cd1f2a8fbdb92ba2e74c84ac3d2a300435ce"} Mar 12 16:26:57 crc kubenswrapper[4687]: I0312 16:26:57.378183 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.378165567 podStartE2EDuration="3.378165567s" podCreationTimestamp="2026-03-12 16:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:26:57.370700627 +0000 UTC m=+1466.334662981" watchObservedRunningTime="2026-03-12 16:26:57.378165567 +0000 UTC m=+1466.342127911" Mar 12 16:26:57 crc kubenswrapper[4687]: I0312 16:26:57.560235 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:26:58 crc kubenswrapper[4687]: I0312 16:26:58.941832 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.068597 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4sj8\" (UniqueName: \"kubernetes.io/projected/aa080a8e-b5e2-43f7-8d94-ea07058748b6-kube-api-access-v4sj8\") pod \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.069036 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-combined-ca-bundle\") pod \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.069149 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-scripts\") pod \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.069181 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-config-data\") pod \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\" (UID: \"aa080a8e-b5e2-43f7-8d94-ea07058748b6\") " Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.076033 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-scripts" (OuterVolumeSpecName: "scripts") pod "aa080a8e-b5e2-43f7-8d94-ea07058748b6" (UID: "aa080a8e-b5e2-43f7-8d94-ea07058748b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.077415 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa080a8e-b5e2-43f7-8d94-ea07058748b6-kube-api-access-v4sj8" (OuterVolumeSpecName: "kube-api-access-v4sj8") pod "aa080a8e-b5e2-43f7-8d94-ea07058748b6" (UID: "aa080a8e-b5e2-43f7-8d94-ea07058748b6"). InnerVolumeSpecName "kube-api-access-v4sj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.124597 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-config-data" (OuterVolumeSpecName: "config-data") pod "aa080a8e-b5e2-43f7-8d94-ea07058748b6" (UID: "aa080a8e-b5e2-43f7-8d94-ea07058748b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.137265 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa080a8e-b5e2-43f7-8d94-ea07058748b6" (UID: "aa080a8e-b5e2-43f7-8d94-ea07058748b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.171629 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4sj8\" (UniqueName: \"kubernetes.io/projected/aa080a8e-b5e2-43f7-8d94-ea07058748b6-kube-api-access-v4sj8\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.171665 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.171677 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.171684 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa080a8e-b5e2-43f7-8d94-ea07058748b6-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.367807 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerStarted","Data":"940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603"} Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.367917 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.367906 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-central-agent" containerID="cri-o://e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8" gracePeriod=30 Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.368127 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="sg-core" containerID="cri-o://5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258" gracePeriod=30 Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.368213 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="proxy-httpd" containerID="cri-o://940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603" gracePeriod=30 Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 
16:26:59.368303 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-notification-agent" containerID="cri-o://6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372" gracePeriod=30 Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.371689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m57l8" event={"ID":"aa080a8e-b5e2-43f7-8d94-ea07058748b6","Type":"ContainerDied","Data":"763e5f3adbab01fec673c7a82e9a76ddbfe736a4067eb3b1d6faf490ea0f05da"} Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.371719 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763e5f3adbab01fec673c7a82e9a76ddbfe736a4067eb3b1d6faf490ea0f05da" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.371773 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m57l8" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.412267 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.08114296 podStartE2EDuration="7.412248528s" podCreationTimestamp="2026-03-12 16:26:52 +0000 UTC" firstStartedPulling="2026-03-12 16:26:53.441088516 +0000 UTC m=+1462.405050860" lastFinishedPulling="2026-03-12 16:26:58.772194084 +0000 UTC m=+1467.736156428" observedRunningTime="2026-03-12 16:26:59.406610667 +0000 UTC m=+1468.370573021" watchObservedRunningTime="2026-03-12 16:26:59.412248528 +0000 UTC m=+1468.376210872" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.481724 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 16:26:59 crc kubenswrapper[4687]: E0312 16:26:59.482439 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa080a8e-b5e2-43f7-8d94-ea07058748b6" containerName="nova-cell0-conductor-db-sync" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.482456 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa080a8e-b5e2-43f7-8d94-ea07058748b6" containerName="nova-cell0-conductor-db-sync" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.482673 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa080a8e-b5e2-43f7-8d94-ea07058748b6" containerName="nova-cell0-conductor-db-sync" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.483490 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.486597 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.486734 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9q888" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.493214 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.580966 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20389fa7-46fb-42a6-a35c-4051648e70ea-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.581155 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z2mt\" (UniqueName: \"kubernetes.io/projected/20389fa7-46fb-42a6-a35c-4051648e70ea-kube-api-access-2z2mt\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.581200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20389fa7-46fb-42a6-a35c-4051648e70ea-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.683299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20389fa7-46fb-42a6-a35c-4051648e70ea-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.683678 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z2mt\" (UniqueName: \"kubernetes.io/projected/20389fa7-46fb-42a6-a35c-4051648e70ea-kube-api-access-2z2mt\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.683723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20389fa7-46fb-42a6-a35c-4051648e70ea-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.688294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20389fa7-46fb-42a6-a35c-4051648e70ea-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.690101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20389fa7-46fb-42a6-a35c-4051648e70ea-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.701338 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z2mt\" (UniqueName: \"kubernetes.io/projected/20389fa7-46fb-42a6-a35c-4051648e70ea-kube-api-access-2z2mt\") pod \"nova-cell0-conductor-0\" (UID: \"20389fa7-46fb-42a6-a35c-4051648e70ea\") " pod="openstack/nova-cell0-conductor-0" Mar 12 16:26:59 crc kubenswrapper[4687]: I0312 16:26:59.808562 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.305009 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.400312 4687 generic.go:334] "Generic (PLEG): container finished" podID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerID="940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603" exitCode=0 Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.400609 4687 generic.go:334] "Generic (PLEG): container finished" podID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerID="5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258" exitCode=2 Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.400618 4687 generic.go:334] "Generic (PLEG): container finished" podID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerID="6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372" exitCode=0 Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.400654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerDied","Data":"940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603"} Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.400680 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerDied","Data":"5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258"} Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.400690 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerDied","Data":"6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372"} Mar 12 16:27:00 crc kubenswrapper[4687]: I0312 16:27:00.404542 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20389fa7-46fb-42a6-a35c-4051648e70ea","Type":"ContainerStarted","Data":"99258893eb29657701145b1e1af4f0a37963ca6377231870afe5cf25c545473c"} Mar 12 16:27:01 crc kubenswrapper[4687]: I0312 16:27:01.416661 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20389fa7-46fb-42a6-a35c-4051648e70ea","Type":"ContainerStarted","Data":"3239e5fc9dd0cd49206f032d63caf293b66212c388d46b48529d8fdc162b6c83"} Mar 12 16:27:01 crc kubenswrapper[4687]: I0312 16:27:01.416926 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 12 16:27:01 crc kubenswrapper[4687]: I0312 16:27:01.442303 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.44227968 podStartE2EDuration="2.44227968s" podCreationTimestamp="2026-03-12 16:26:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:01.430766531 +0000 UTC m=+1470.394728885" watchObservedRunningTime="2026-03-12 16:27:01.44227968 +0000 UTC m=+1470.406242024" Mar 12 16:27:05 crc kubenswrapper[4687]: I0312 16:27:05.044934 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 16:27:05 crc kubenswrapper[4687]: I0312 16:27:05.046661 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 16:27:05 crc kubenswrapper[4687]: I0312 16:27:05.090675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 16:27:05 crc kubenswrapper[4687]: I0312 16:27:05.092303 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 16:27:05 crc kubenswrapper[4687]: I0312 16:27:05.457332 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 16:27:05 crc kubenswrapper[4687]: I0312 16:27:05.457427 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.136627 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.239112 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-config-data\") pod \"3f83cfb5-7cf9-4547-815d-995b920102ab\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.239172 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-combined-ca-bundle\") pod \"3f83cfb5-7cf9-4547-815d-995b920102ab\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.239193 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9vlm\" (UniqueName: \"kubernetes.io/projected/3f83cfb5-7cf9-4547-815d-995b920102ab-kube-api-access-c9vlm\") pod \"3f83cfb5-7cf9-4547-815d-995b920102ab\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.239218 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-log-httpd\") pod \"3f83cfb5-7cf9-4547-815d-995b920102ab\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.239411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-scripts\") pod \"3f83cfb5-7cf9-4547-815d-995b920102ab\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.239459 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-sg-core-conf-yaml\") pod 
\"3f83cfb5-7cf9-4547-815d-995b920102ab\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.239503 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-run-httpd\") pod \"3f83cfb5-7cf9-4547-815d-995b920102ab\" (UID: \"3f83cfb5-7cf9-4547-815d-995b920102ab\") " Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.240098 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f83cfb5-7cf9-4547-815d-995b920102ab" (UID: "3f83cfb5-7cf9-4547-815d-995b920102ab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.240338 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f83cfb5-7cf9-4547-815d-995b920102ab" (UID: "3f83cfb5-7cf9-4547-815d-995b920102ab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.248412 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f83cfb5-7cf9-4547-815d-995b920102ab-kube-api-access-c9vlm" (OuterVolumeSpecName: "kube-api-access-c9vlm") pod "3f83cfb5-7cf9-4547-815d-995b920102ab" (UID: "3f83cfb5-7cf9-4547-815d-995b920102ab"). InnerVolumeSpecName "kube-api-access-c9vlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.248656 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-scripts" (OuterVolumeSpecName: "scripts") pod "3f83cfb5-7cf9-4547-815d-995b920102ab" (UID: "3f83cfb5-7cf9-4547-815d-995b920102ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.280049 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f83cfb5-7cf9-4547-815d-995b920102ab" (UID: "3f83cfb5-7cf9-4547-815d-995b920102ab"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.344795 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9vlm\" (UniqueName: \"kubernetes.io/projected/3f83cfb5-7cf9-4547-815d-995b920102ab-kube-api-access-c9vlm\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.344825 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.344834 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.344842 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.344850 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83cfb5-7cf9-4547-815d-995b920102ab-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.379401 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-config-data" (OuterVolumeSpecName: "config-data") pod "3f83cfb5-7cf9-4547-815d-995b920102ab" (UID: "3f83cfb5-7cf9-4547-815d-995b920102ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.440625 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f83cfb5-7cf9-4547-815d-995b920102ab" (UID: "3f83cfb5-7cf9-4547-815d-995b920102ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.447741 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.447784 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83cfb5-7cf9-4547-815d-995b920102ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.471781 4687 generic.go:334] "Generic (PLEG): container finished" podID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerID="e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8" exitCode=0 Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.471826 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerDied","Data":"e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8"} Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.471896 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.471949 4687 scope.go:117] "RemoveContainer" containerID="940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.471890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83cfb5-7cf9-4547-815d-995b920102ab","Type":"ContainerDied","Data":"4d4b1a1ad5951dd54d2ddd42d9d21bfa17ce96759b27bde41872e325f521edeb"} Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.530256 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.547475 4687 scope.go:117] "RemoveContainer" containerID="5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.607814 4687 scope.go:117] "RemoveContainer" containerID="6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.625841 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.638239 4687 scope.go:117] "RemoveContainer" containerID="e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.652176 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.652670 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="proxy-httpd" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.652697 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="proxy-httpd" Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.652723 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-notification-agent" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.652730 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-notification-agent" Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.652742 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-central-agent" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.652749 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-central-agent" Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.652782 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="sg-core" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.652788 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="sg-core" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.653011 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="sg-core" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.653032 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="proxy-httpd" Mar 12 16:27:06 crc kubenswrapper[4687]: 
I0312 16:27:06.653049 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-notification-agent" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.653063 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" containerName="ceilometer-central-agent" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.656134 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.660661 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.660708 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.670549 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.679561 4687 scope.go:117] "RemoveContainer" containerID="940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603" Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.680109 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603\": container with ID starting with 940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603 not found: ID does not exist" containerID="940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.680151 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603"} err="failed to get container status \"940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603\": rpc error: code = NotFound desc = could not find container \"940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603\": container with ID starting with 940718972e96bc767ff6fe071f6088d670015866c8f5aa695cdd9bfb529bd603 not found: ID does not exist" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.680177 4687 scope.go:117] "RemoveContainer" containerID="5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258" Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.680666 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258\": container with ID starting with 5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258 not found: ID does not exist" containerID="5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.680731 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258"} err="failed to get container status \"5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258\": rpc error: code = NotFound desc = could not find container \"5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258\": container with ID starting with 5caa3675b2fde69471248209e18abb381b3d5208eab33ecfe03b4340f76f9258 not found: ID does not exist" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 
16:27:06.680744 4687 scope.go:117] "RemoveContainer" containerID="6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372" Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.681130 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372\": container with ID starting with 6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372 not found: ID does not exist" containerID="6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.681154 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372"} err="failed to get container status \"6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372\": rpc error: code = NotFound desc = could not find container \"6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372\": container with ID starting with 6e504c806f1d99a13f3192c3953abb0f2538b61ea2884ee4c2f061202503e372 not found: ID does not exist" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.681167 4687 scope.go:117] "RemoveContainer" containerID="e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8" Mar 12 16:27:06 crc kubenswrapper[4687]: E0312 16:27:06.681420 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8\": container with ID starting with e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8 not found: ID does not exist" containerID="e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.681443 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8"} err="failed to get container status \"e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8\": rpc error: code = NotFound desc = could not find container \"e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8\": container with ID starting with e7431a2cbf90d7aa67d623f291dc618f5dc81d46dcbd9621e8341aa28dcaaed8 not found: ID does not exist" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.755766 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.755861 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-config-data\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.755895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nbg\" (UniqueName: \"kubernetes.io/projected/423af688-f49e-42a8-b2d2-623cd01ac948-kube-api-access-b6nbg\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " 
pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.755967 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-run-httpd\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.756044 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.756100 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-scripts\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.756152 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-log-httpd\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858419 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-log-httpd\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858613 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858666 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-config-data\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6nbg\" (UniqueName: \"kubernetes.io/projected/423af688-f49e-42a8-b2d2-623cd01ac948-kube-api-access-b6nbg\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-run-httpd\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858935 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-scripts\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.858996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-log-httpd\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.860069 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-run-httpd\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.864044 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-scripts\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.864397 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.864571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.868325 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-config-data\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:06 crc kubenswrapper[4687]: I0312 16:27:06.886923 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6nbg\" (UniqueName: \"kubernetes.io/projected/423af688-f49e-42a8-b2d2-623cd01ac948-kube-api-access-b6nbg\") pod \"ceilometer-0\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " pod="openstack/ceilometer-0" Mar 12 16:27:07 crc kubenswrapper[4687]: I0312 16:27:07.042473 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:07 crc kubenswrapper[4687]: I0312 16:27:07.563303 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:07 crc kubenswrapper[4687]: I0312 16:27:07.751910 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f83cfb5-7cf9-4547-815d-995b920102ab" path="/var/lib/kubelet/pods/3f83cfb5-7cf9-4547-815d-995b920102ab/volumes" Mar 12 16:27:08 crc kubenswrapper[4687]: I0312 16:27:08.154151 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 16:27:08 crc kubenswrapper[4687]: I0312 16:27:08.154514 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 16:27:08 crc kubenswrapper[4687]: I0312 16:27:08.231886 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 16:27:08 crc kubenswrapper[4687]: I0312 16:27:08.495766 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerStarted","Data":"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e"} Mar 12 16:27:08 crc kubenswrapper[4687]: I0312 16:27:08.495826 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerStarted","Data":"a9a549c5a53e7bf20aa6668ff704a1cf31d188e8ee6ea16fd156c0a2b9f31342"} Mar 12 16:27:09 crc kubenswrapper[4687]: I0312 16:27:09.508508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerStarted","Data":"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22"} Mar 12 16:27:09 crc kubenswrapper[4687]: I0312 16:27:09.853352 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.437150 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-r62f9"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.438987 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.442330 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.442641 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.448929 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-r62f9"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.531265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerStarted","Data":"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4"} Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.552864 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thj4\" (UniqueName: \"kubernetes.io/projected/07b5d6f6-b886-4992-a758-0f4c72a872bf-kube-api-access-4thj4\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.552976 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-scripts\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.553130 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-config-data\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.553192 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.596415 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.598660 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.602614 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.630842 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.656858 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-scripts\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.656983 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.657009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2h8\" (UniqueName: \"kubernetes.io/projected/aaa7465a-eef7-4b1d-951f-153168f06d07-kube-api-access-sq2h8\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.657049 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa7465a-eef7-4b1d-951f-153168f06d07-logs\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.657103 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-config-data\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.657122 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-config-data\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.657162 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.657215 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thj4\" (UniqueName: \"kubernetes.io/projected/07b5d6f6-b886-4992-a758-0f4c72a872bf-kube-api-access-4thj4\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.669642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.681812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-scripts\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.697021 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-config-data\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.735676 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.740741 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thj4\" (UniqueName: \"kubernetes.io/projected/07b5d6f6-b886-4992-a758-0f4c72a872bf-kube-api-access-4thj4\") pod \"nova-cell0-cell-mapping-r62f9\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.751571 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.759780 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.759819 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2h8\" (UniqueName: \"kubernetes.io/projected/aaa7465a-eef7-4b1d-951f-153168f06d07-kube-api-access-sq2h8\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.759862 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa7465a-eef7-4b1d-951f-153168f06d07-logs\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.759900 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-config-data\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.761300 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.761909 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa7465a-eef7-4b1d-951f-153168f06d07-logs\") pod \"nova-api-0\" (UID: 
\"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.767328 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.775981 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.801086 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-9pzgb"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.802197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.802858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-config-data\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.804015 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.826980 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2h8\" (UniqueName: \"kubernetes.io/projected/aaa7465a-eef7-4b1d-951f-153168f06d07-kube-api-access-sq2h8\") pod \"nova-api-0\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.862034 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9pzgb"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.864486 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a21ed79-f865-4e4e-9794-2091bb9565c1-operator-scripts\") pod \"aodh-db-create-9pzgb\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.864547 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9c5fb-55e9-4941-92ee-074bb7937135-logs\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.864585 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vb7\" (UniqueName: \"kubernetes.io/projected/7a21ed79-f865-4e4e-9794-2091bb9565c1-kube-api-access-x7vb7\") pod \"aodh-db-create-9pzgb\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.864617 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d55jt\" (UniqueName: \"kubernetes.io/projected/51c9c5fb-55e9-4941-92ee-074bb7937135-kube-api-access-d55jt\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 
16:27:10.864642 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.864723 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-config-data\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.910380 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.911767 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.928051 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.955501 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972152 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a21ed79-f865-4e4e-9794-2091bb9565c1-operator-scripts\") pod \"aodh-db-create-9pzgb\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972250 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/a8d7b398-f0d2-45be-894b-f982f5216512-kube-api-access-qdflb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9c5fb-55e9-4941-92ee-074bb7937135-logs\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972335 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7vb7\" (UniqueName: \"kubernetes.io/projected/7a21ed79-f865-4e4e-9794-2091bb9565c1-kube-api-access-x7vb7\") pod \"aodh-db-create-9pzgb\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d55jt\" (UniqueName: 
\"kubernetes.io/projected/51c9c5fb-55e9-4941-92ee-074bb7937135-kube-api-access-d55jt\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972404 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972476 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-config-data\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.972510 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.973399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a21ed79-f865-4e4e-9794-2091bb9565c1-operator-scripts\") pod \"aodh-db-create-9pzgb\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:10 crc kubenswrapper[4687]: I0312 16:27:10.973855 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9c5fb-55e9-4941-92ee-074bb7937135-logs\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:10.996053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-config-data\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:10.997572 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.037221 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.048000 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d55jt\" (UniqueName: \"kubernetes.io/projected/51c9c5fb-55e9-4941-92ee-074bb7937135-kube-api-access-d55jt\") pod \"nova-metadata-0\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " pod="openstack/nova-metadata-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.082774 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.082914 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.082972 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/a8d7b398-f0d2-45be-894b-f982f5216512-kube-api-access-qdflb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.105874 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.107749 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.122660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.123009 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.130542 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vb7\" (UniqueName: \"kubernetes.io/projected/7a21ed79-f865-4e4e-9794-2091bb9565c1-kube-api-access-x7vb7\") pod \"aodh-db-create-9pzgb\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.137095 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.143970 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/a8d7b398-f0d2-45be-894b-f982f5216512-kube-api-access-qdflb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.205007 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-config-data\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.205325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhdmd\" (UniqueName: \"kubernetes.io/projected/f902afeb-cd68-4dc1-ab83-a5cc86808687-kube-api-access-hhdmd\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.207470 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.283400 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.324634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.324719 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-config-data\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.324810 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhdmd\" (UniqueName: \"kubernetes.io/projected/f902afeb-cd68-4dc1-ab83-a5cc86808687-kube-api-access-hhdmd\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.328944 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/aodh-d8b1-account-create-update-7jbm7"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.357461 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.361413 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.361465 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-config-data\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.362633 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.363245 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.373897 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.374099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhdmd\" (UniqueName: \"kubernetes.io/projected/f902afeb-cd68-4dc1-ab83-a5cc86808687-kube-api-access-hhdmd\") pod \"nova-scheduler-0\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.403071 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-p5bmc"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.408434 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.416895 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.432945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2d3452-ab24-443c-addf-92e8f6ccc55e-operator-scripts\") pod \"aodh-d8b1-account-create-update-7jbm7\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.433248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzvk\" (UniqueName: \"kubernetes.io/projected/8b2d3452-ab24-443c-addf-92e8f6ccc55e-kube-api-access-7kzvk\") pod \"aodh-d8b1-account-create-update-7jbm7\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.466803 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d8b1-account-create-update-7jbm7"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.467110 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.536786 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm4j9\" (UniqueName: \"kubernetes.io/projected/4d9aadaa-3727-48e2-b482-86ccb2f809cf-kube-api-access-tm4j9\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.536860 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2d3452-ab24-443c-addf-92e8f6ccc55e-operator-scripts\") pod \"aodh-d8b1-account-create-update-7jbm7\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.536888 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.536982 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.537029 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzvk\" (UniqueName: \"kubernetes.io/projected/8b2d3452-ab24-443c-addf-92e8f6ccc55e-kube-api-access-7kzvk\") pod \"aodh-d8b1-account-create-update-7jbm7\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.537266 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-svc\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.537302 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.543568 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-config\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.544725 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8b2d3452-ab24-443c-addf-92e8f6ccc55e-operator-scripts\") pod \"aodh-d8b1-account-create-update-7jbm7\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.567225 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzvk\" (UniqueName: \"kubernetes.io/projected/8b2d3452-ab24-443c-addf-92e8f6ccc55e-kube-api-access-7kzvk\") pod \"aodh-d8b1-account-create-update-7jbm7\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.582837 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-p5bmc"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.646744 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm4j9\" (UniqueName: \"kubernetes.io/projected/4d9aadaa-3727-48e2-b482-86ccb2f809cf-kube-api-access-tm4j9\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.646801 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.646872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.646953 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-svc\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.646972 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.647007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-config\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.647798 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 
16:27:11.648200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-svc\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.648497 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.648558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.653918 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-config\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.670666 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm4j9\" (UniqueName: \"kubernetes.io/projected/4d9aadaa-3727-48e2-b482-86ccb2f809cf-kube-api-access-tm4j9\") pod \"dnsmasq-dns-9b86998b5-p5bmc\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.704003 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.773426 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-r62f9"] Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.790037 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:11 crc kubenswrapper[4687]: I0312 16:27:11.987065 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.595807 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.608801 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r62f9" event={"ID":"07b5d6f6-b886-4992-a758-0f4c72a872bf","Type":"ContainerStarted","Data":"60ec0d8b163501a95d33c7156b7f11d7b75e1dc0c8221bf3ff26af335882207b"} Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.608847 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r62f9" event={"ID":"07b5d6f6-b886-4992-a758-0f4c72a872bf","Type":"ContainerStarted","Data":"f846fdf393fa05c5a926156558a853730f52c4f424d02be1573fdaa081219fc8"} Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.627884 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:12 crc kubenswrapper[4687]: W0312 16:27:12.639500 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288 WatchSource:0}: Error finding container 3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288: Status 404 returned error can't find the container with id 3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288 Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.641704 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9pzgb"] Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.642439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerStarted","Data":"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed"} Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.642519 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.654417 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa7465a-eef7-4b1d-951f-153168f06d07","Type":"ContainerStarted","Data":"28481b5fccb007cdb8cd4cc4f57d6fec35bca96c36d14ec1ef7161280252aaba"} Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.659553 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.668277 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-r62f9" podStartSLOduration=2.668258773 podStartE2EDuration="2.668258773s" podCreationTimestamp="2026-03-12 16:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:12.625286489 +0000 UTC m=+1481.589248833" watchObservedRunningTime="2026-03-12 16:27:12.668258773 +0000 UTC m=+1481.632221107" Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.669940 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.095986537 
podStartE2EDuration="6.669932828s" podCreationTimestamp="2026-03-12 16:27:06 +0000 UTC" firstStartedPulling="2026-03-12 16:27:07.595937532 +0000 UTC m=+1476.559899876" lastFinishedPulling="2026-03-12 16:27:12.169883823 +0000 UTC m=+1481.133846167" observedRunningTime="2026-03-12 16:27:12.662525279 +0000 UTC m=+1481.626487623" watchObservedRunningTime="2026-03-12 16:27:12.669932828 +0000 UTC m=+1481.633895172" Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.896576 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wnhb5"] Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.898227 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.900510 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wnhb5"] Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.901655 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 16:27:12 crc kubenswrapper[4687]: I0312 16:27:12.904434 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.016135 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.017299 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-scripts\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.017427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-config-data\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.017499 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwhb4\" (UniqueName: \"kubernetes.io/projected/4d938a4a-a974-450c-b299-2e0e704d0da1-kube-api-access-pwhb4\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.060500 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-p5bmc"] Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.086606 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d8b1-account-create-update-7jbm7"] Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.119934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwhb4\" (UniqueName: 
\"kubernetes.io/projected/4d938a4a-a974-450c-b299-2e0e704d0da1-kube-api-access-pwhb4\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.120006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.120188 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-scripts\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.120210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-config-data\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.137161 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwhb4\" (UniqueName: \"kubernetes.io/projected/4d938a4a-a974-450c-b299-2e0e704d0da1-kube-api-access-pwhb4\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.137421 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.137905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-config-data\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.142862 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-scripts\") pod \"nova-cell1-conductor-db-sync-wnhb5\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.223009 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.811025 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a8d7b398-f0d2-45be-894b-f982f5216512","Type":"ContainerStarted","Data":"3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288"} Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.811575 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c9c5fb-55e9-4941-92ee-074bb7937135","Type":"ContainerStarted","Data":"d7ba665bd73a9f382cae2cf00a60e97cf515d10bde1d252b5a9daa2e5fe774be"} Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.811592 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f902afeb-cd68-4dc1-ab83-a5cc86808687","Type":"ContainerStarted","Data":"12f494a8d78f9c2e35325fa70cbc41bb34418c1b3d97e50c980563183bd37585"} Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.827749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9pzgb" event={"ID":"7a21ed79-f865-4e4e-9794-2091bb9565c1","Type":"ContainerStarted","Data":"ec3b2dee10f12a712be8aea09f50352fb8204e230cee7434704f549cd6477aba"} Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.827792 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9pzgb" event={"ID":"7a21ed79-f865-4e4e-9794-2091bb9565c1","Type":"ContainerStarted","Data":"cb149d8fe2dff3961d921f1ad37acc62c2eb79116fadf85304d266eb0cd366c4"} Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.843299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" event={"ID":"4d9aadaa-3727-48e2-b482-86ccb2f809cf","Type":"ContainerStarted","Data":"32a24639cd98ff058a4b73b0faba1ca3dfbbd36de06f6dc42e3f3dbc307976a6"} Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.883881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d8b1-account-create-update-7jbm7" event={"ID":"8b2d3452-ab24-443c-addf-92e8f6ccc55e","Type":"ContainerStarted","Data":"e72fd9d0fbd6bbbb1f06f66d23ed69761a421d09bb8ca6db94b35eb57d3e859d"} Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.887978 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-9pzgb" podStartSLOduration=3.887956159 podStartE2EDuration="3.887956159s" podCreationTimestamp="2026-03-12 16:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:13.884779664 +0000 UTC m=+1482.848742118" watchObservedRunningTime="2026-03-12 16:27:13.887956159 +0000 UTC m=+1482.851918503" Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.934600 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wnhb5"] Mar 12 16:27:13 crc kubenswrapper[4687]: I0312 16:27:13.956665 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-d8b1-account-create-update-7jbm7" podStartSLOduration=2.956646233 podStartE2EDuration="2.956646233s" podCreationTimestamp="2026-03-12 16:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:13.918837068 +0000 UTC m=+1482.882799412" watchObservedRunningTime="2026-03-12 16:27:13.956646233 +0000 UTC m=+1482.920608577" 
Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.122960 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.123023 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.915007 4687 generic.go:334] "Generic (PLEG): container finished" podID="7a21ed79-f865-4e4e-9794-2091bb9565c1" containerID="ec3b2dee10f12a712be8aea09f50352fb8204e230cee7434704f549cd6477aba" exitCode=0 Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.915306 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9pzgb" event={"ID":"7a21ed79-f865-4e4e-9794-2091bb9565c1","Type":"ContainerDied","Data":"ec3b2dee10f12a712be8aea09f50352fb8204e230cee7434704f549cd6477aba"} Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.934224 4687 generic.go:334] "Generic (PLEG): container finished" podID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerID="44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba" exitCode=0 Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.934452 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" event={"ID":"4d9aadaa-3727-48e2-b482-86ccb2f809cf","Type":"ContainerDied","Data":"44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba"} Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.956726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" event={"ID":"4d938a4a-a974-450c-b299-2e0e704d0da1","Type":"ContainerStarted","Data":"5ef7dd16879245f379b914e7a034d8b5ecc587e1346e7681fe13657401e21bce"} Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.956986 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" event={"ID":"4d938a4a-a974-450c-b299-2e0e704d0da1","Type":"ContainerStarted","Data":"38c72697d400536caa8b825044a62e9fb055abaf4c1072df518157d2d0baeb7a"} Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.969376 4687 generic.go:334] "Generic (PLEG): container finished" podID="8b2d3452-ab24-443c-addf-92e8f6ccc55e" containerID="f0fcbeb2a84d714f1355040dda6e44a58d0236aa9e7891a9562c8a804021ea19" exitCode=0 Mar 12 16:27:14 crc kubenswrapper[4687]: I0312 16:27:14.969659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d8b1-account-create-update-7jbm7" event={"ID":"8b2d3452-ab24-443c-addf-92e8f6ccc55e","Type":"ContainerDied","Data":"f0fcbeb2a84d714f1355040dda6e44a58d0236aa9e7891a9562c8a804021ea19"} Mar 12 16:27:15 crc kubenswrapper[4687]: I0312 16:27:15.080542 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" podStartSLOduration=3.080522257 podStartE2EDuration="3.080522257s" podCreationTimestamp="2026-03-12 16:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:15.014849384 
+0000 UTC m=+1483.978811728" watchObservedRunningTime="2026-03-12 16:27:15.080522257 +0000 UTC m=+1484.044484601" Mar 12 16:27:15 crc kubenswrapper[4687]: I0312 16:27:15.200757 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:15 crc kubenswrapper[4687]: I0312 16:27:15.229987 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.298464 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.304342 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.461842 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2d3452-ab24-443c-addf-92e8f6ccc55e-operator-scripts\") pod \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.461892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a21ed79-f865-4e4e-9794-2091bb9565c1-operator-scripts\") pod \"7a21ed79-f865-4e4e-9794-2091bb9565c1\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.461950 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzvk\" (UniqueName: \"kubernetes.io/projected/8b2d3452-ab24-443c-addf-92e8f6ccc55e-kube-api-access-7kzvk\") pod \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\" (UID: \"8b2d3452-ab24-443c-addf-92e8f6ccc55e\") " Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.462081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7vb7\" (UniqueName: \"kubernetes.io/projected/7a21ed79-f865-4e4e-9794-2091bb9565c1-kube-api-access-x7vb7\") pod \"7a21ed79-f865-4e4e-9794-2091bb9565c1\" (UID: \"7a21ed79-f865-4e4e-9794-2091bb9565c1\") " Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.464074 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2d3452-ab24-443c-addf-92e8f6ccc55e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b2d3452-ab24-443c-addf-92e8f6ccc55e" (UID: "8b2d3452-ab24-443c-addf-92e8f6ccc55e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.464117 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a21ed79-f865-4e4e-9794-2091bb9565c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a21ed79-f865-4e4e-9794-2091bb9565c1" (UID: "7a21ed79-f865-4e4e-9794-2091bb9565c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.468967 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2d3452-ab24-443c-addf-92e8f6ccc55e-kube-api-access-7kzvk" (OuterVolumeSpecName: "kube-api-access-7kzvk") pod "8b2d3452-ab24-443c-addf-92e8f6ccc55e" (UID: "8b2d3452-ab24-443c-addf-92e8f6ccc55e"). 
InnerVolumeSpecName "kube-api-access-7kzvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.481737 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a21ed79-f865-4e4e-9794-2091bb9565c1-kube-api-access-x7vb7" (OuterVolumeSpecName: "kube-api-access-x7vb7") pod "7a21ed79-f865-4e4e-9794-2091bb9565c1" (UID: "7a21ed79-f865-4e4e-9794-2091bb9565c1"). InnerVolumeSpecName "kube-api-access-x7vb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.566058 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2d3452-ab24-443c-addf-92e8f6ccc55e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.566120 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a21ed79-f865-4e4e-9794-2091bb9565c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.566135 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzvk\" (UniqueName: \"kubernetes.io/projected/8b2d3452-ab24-443c-addf-92e8f6ccc55e-kube-api-access-7kzvk\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:17 crc kubenswrapper[4687]: I0312 16:27:17.566149 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7vb7\" (UniqueName: \"kubernetes.io/projected/7a21ed79-f865-4e4e-9794-2091bb9565c1-kube-api-access-x7vb7\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.015418 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d8b1-account-create-update-7jbm7" Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.015406 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d8b1-account-create-update-7jbm7" event={"ID":"8b2d3452-ab24-443c-addf-92e8f6ccc55e","Type":"ContainerDied","Data":"e72fd9d0fbd6bbbb1f06f66d23ed69761a421d09bb8ca6db94b35eb57d3e859d"} Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.015852 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e72fd9d0fbd6bbbb1f06f66d23ed69761a421d09bb8ca6db94b35eb57d3e859d" Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.017890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9pzgb" event={"ID":"7a21ed79-f865-4e4e-9794-2091bb9565c1","Type":"ContainerDied","Data":"cb149d8fe2dff3961d921f1ad37acc62c2eb79116fadf85304d266eb0cd366c4"} Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.017921 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb149d8fe2dff3961d921f1ad37acc62c2eb79116fadf85304d266eb0cd366c4" Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.017980 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-9pzgb" Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.023051 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" event={"ID":"4d9aadaa-3727-48e2-b482-86ccb2f809cf","Type":"ContainerStarted","Data":"8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8"} Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.024121 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:18 crc kubenswrapper[4687]: I0312 16:27:18.046262 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" podStartSLOduration=7.04624377 podStartE2EDuration="7.04624377s" podCreationTimestamp="2026-03-12 16:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:18.044624877 +0000 UTC m=+1487.008587241" watchObservedRunningTime="2026-03-12 16:27:18.04624377 +0000 UTC m=+1487.010206114" Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.035233 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a8d7b398-f0d2-45be-894b-f982f5216512","Type":"ContainerStarted","Data":"736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab"} Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.035754 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a8d7b398-f0d2-45be-894b-f982f5216512" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab" gracePeriod=30 Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.037873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa7465a-eef7-4b1d-951f-153168f06d07","Type":"ContainerStarted","Data":"60f3d47b339beec67f00c3939ae89307245941a9b8d8c7d7847eb76bc7b139e2"} Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.037904 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa7465a-eef7-4b1d-951f-153168f06d07","Type":"ContainerStarted","Data":"cf3b36ba80f50b2fa69822d32fb1d6ec8bed549ad4a2644c7fd8da4099801d13"} Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.048684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c9c5fb-55e9-4941-92ee-074bb7937135","Type":"ContainerStarted","Data":"e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f"} Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.048729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c9c5fb-55e9-4941-92ee-074bb7937135","Type":"ContainerStarted","Data":"639f8ed1837a3b804abbcf73651b34d8b826a2e5d6b1c680d1e8ae7c7691a712"} Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.048841 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-log" containerID="cri-o://639f8ed1837a3b804abbcf73651b34d8b826a2e5d6b1c680d1e8ae7c7691a712" gracePeriod=30 Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.048928 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-metadata" containerID="cri-o://e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f" gracePeriod=30 Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.055165 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.45149974 podStartE2EDuration="9.055144466s" podCreationTimestamp="2026-03-12 16:27:10 +0000 UTC" firstStartedPulling="2026-03-12 16:27:12.64990159 +0000 UTC m=+1481.613863934" lastFinishedPulling="2026-03-12 16:27:18.253546316 +0000 UTC m=+1487.217508660" observedRunningTime="2026-03-12 16:27:19.051483629 +0000 UTC m=+1488.015445973" watchObservedRunningTime="2026-03-12 16:27:19.055144466 +0000 UTC m=+1488.019106810" Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.063788 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f902afeb-cd68-4dc1-ab83-a5cc86808687","Type":"ContainerStarted","Data":"89defcaa532feb5f1040356f83c3bfc5d16c49f680e99b2718ae78d46a679b8b"} Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.085745 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.45295834 podStartE2EDuration="9.085536342s" podCreationTimestamp="2026-03-12 16:27:10 +0000 UTC" firstStartedPulling="2026-03-12 16:27:12.620961723 +0000 UTC m=+1481.584924067" lastFinishedPulling="2026-03-12 16:27:18.253539735 +0000 UTC m=+1487.217502069" observedRunningTime="2026-03-12 16:27:19.081554945 +0000 UTC m=+1488.045517289" watchObservedRunningTime="2026-03-12 16:27:19.085536342 +0000 UTC m=+1488.049498686" Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.112524 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.881939609 podStartE2EDuration="9.112504106s" podCreationTimestamp="2026-03-12 16:27:10 +0000 UTC" firstStartedPulling="2026-03-12 16:27:12.025570788 +0000 UTC m=+1480.989533132" lastFinishedPulling="2026-03-12 16:27:18.256135285 +0000 UTC m=+1487.220097629" observedRunningTime="2026-03-12 16:27:19.099561069 +0000 UTC m=+1488.063523413" watchObservedRunningTime="2026-03-12 16:27:19.112504106 +0000 UTC m=+1488.076466450" Mar 12 16:27:19 crc kubenswrapper[4687]: I0312 16:27:19.140044 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.510256528 podStartE2EDuration="9.140023845s" podCreationTimestamp="2026-03-12 16:27:10 +0000 UTC" firstStartedPulling="2026-03-12 16:27:12.621502247 +0000 UTC m=+1481.585464591" lastFinishedPulling="2026-03-12 16:27:18.251269564 +0000 UTC m=+1487.215231908" observedRunningTime="2026-03-12 16:27:19.133478579 +0000 UTC m=+1488.097440923" watchObservedRunningTime="2026-03-12 16:27:19.140023845 +0000 UTC m=+1488.103986189" Mar 12 16:27:20 crc kubenswrapper[4687]: I0312 16:27:20.075272 4687 generic.go:334] "Generic (PLEG): container finished" podID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerID="639f8ed1837a3b804abbcf73651b34d8b826a2e5d6b1c680d1e8ae7c7691a712" exitCode=143 Mar 12 16:27:20 crc kubenswrapper[4687]: I0312 16:27:20.076606 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c9c5fb-55e9-4941-92ee-074bb7937135","Type":"ContainerDied","Data":"639f8ed1837a3b804abbcf73651b34d8b826a2e5d6b1c680d1e8ae7c7691a712"} Mar 12 16:27:20 crc kubenswrapper[4687]: I0312 
16:27:20.928683 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 16:27:20 crc kubenswrapper[4687]: I0312 16:27:20.929020 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.344935 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-wblmk"] Mar 12 16:27:21 crc kubenswrapper[4687]: E0312 16:27:21.345759 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a21ed79-f865-4e4e-9794-2091bb9565c1" containerName="mariadb-database-create" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.345772 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a21ed79-f865-4e4e-9794-2091bb9565c1" containerName="mariadb-database-create" Mar 12 16:27:21 crc kubenswrapper[4687]: E0312 16:27:21.345785 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2d3452-ab24-443c-addf-92e8f6ccc55e" containerName="mariadb-account-create-update" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.345792 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2d3452-ab24-443c-addf-92e8f6ccc55e" containerName="mariadb-account-create-update" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.346013 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2d3452-ab24-443c-addf-92e8f6ccc55e" containerName="mariadb-account-create-update" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.346033 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a21ed79-f865-4e4e-9794-2091bb9565c1" containerName="mariadb-database-create" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.346843 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.348990 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.349302 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.349417 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.356636 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-w4khk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.357197 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wblmk"] Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.364569 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.364609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.409462 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.413813 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5787j\" (UniqueName: \"kubernetes.io/projected/9201a49a-2a15-400d-ab4f-20f55725f719-kube-api-access-5787j\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.414135 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-scripts\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.414331 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-combined-ca-bundle\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.414409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-config-data\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.468887 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.468943 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.514246 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.516492 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-scripts\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.516638 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-combined-ca-bundle\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.516685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-config-data\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.516886 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5787j\" (UniqueName: \"kubernetes.io/projected/9201a49a-2a15-400d-ab4f-20f55725f719-kube-api-access-5787j\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.523559 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-combined-ca-bundle\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.524584 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-config-data\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.530972 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-scripts\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.558237 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5787j\" (UniqueName: \"kubernetes.io/projected/9201a49a-2a15-400d-ab4f-20f55725f719-kube-api-access-5787j\") pod \"aodh-db-sync-wblmk\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:21 crc kubenswrapper[4687]: I0312 16:27:21.672532 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:22 crc kubenswrapper[4687]: I0312 16:27:22.012756 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 16:27:22 crc kubenswrapper[4687]: I0312 16:27:22.013314 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 16:27:22 crc kubenswrapper[4687]: I0312 16:27:22.160464 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 16:27:22 crc kubenswrapper[4687]: I0312 16:27:22.282500 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wblmk"] Mar 12 16:27:23 crc kubenswrapper[4687]: I0312 16:27:23.122419 4687 generic.go:334] "Generic (PLEG): container finished" podID="07b5d6f6-b886-4992-a758-0f4c72a872bf" containerID="60ec0d8b163501a95d33c7156b7f11d7b75e1dc0c8221bf3ff26af335882207b" exitCode=0 Mar 12 16:27:23 crc kubenswrapper[4687]: I0312 16:27:23.122476 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r62f9" event={"ID":"07b5d6f6-b886-4992-a758-0f4c72a872bf","Type":"ContainerDied","Data":"60ec0d8b163501a95d33c7156b7f11d7b75e1dc0c8221bf3ff26af335882207b"} Mar 12 16:27:23 crc kubenswrapper[4687]: I0312 16:27:23.124488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wblmk" event={"ID":"9201a49a-2a15-400d-ab4f-20f55725f719","Type":"ContainerStarted","Data":"888b4878d1ff7a52007a14287ee41512344552f580d8721ee6ed8c698c4fcf50"} Mar 12 16:27:24 crc kubenswrapper[4687]: I0312 16:27:24.141157 4687 generic.go:334] "Generic (PLEG): container finished" podID="4d938a4a-a974-450c-b299-2e0e704d0da1" containerID="5ef7dd16879245f379b914e7a034d8b5ecc587e1346e7681fe13657401e21bce" exitCode=0 Mar 12 16:27:24 crc kubenswrapper[4687]: I0312 16:27:24.141260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" event={"ID":"4d938a4a-a974-450c-b299-2e0e704d0da1","Type":"ContainerDied","Data":"5ef7dd16879245f379b914e7a034d8b5ecc587e1346e7681fe13657401e21bce"} Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.508234 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.552205 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-config-data\") pod \"07b5d6f6-b886-4992-a758-0f4c72a872bf\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.552314 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thj4\" (UniqueName: \"kubernetes.io/projected/07b5d6f6-b886-4992-a758-0f4c72a872bf-kube-api-access-4thj4\") pod \"07b5d6f6-b886-4992-a758-0f4c72a872bf\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.552686 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-scripts\") pod \"07b5d6f6-b886-4992-a758-0f4c72a872bf\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.552764 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-combined-ca-bundle\") pod \"07b5d6f6-b886-4992-a758-0f4c72a872bf\" (UID: \"07b5d6f6-b886-4992-a758-0f4c72a872bf\") " Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.558114 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-scripts" (OuterVolumeSpecName: "scripts") pod "07b5d6f6-b886-4992-a758-0f4c72a872bf" (UID: "07b5d6f6-b886-4992-a758-0f4c72a872bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.559573 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b5d6f6-b886-4992-a758-0f4c72a872bf-kube-api-access-4thj4" (OuterVolumeSpecName: "kube-api-access-4thj4") pod "07b5d6f6-b886-4992-a758-0f4c72a872bf" (UID: "07b5d6f6-b886-4992-a758-0f4c72a872bf"). InnerVolumeSpecName "kube-api-access-4thj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.595262 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-config-data" (OuterVolumeSpecName: "config-data") pod "07b5d6f6-b886-4992-a758-0f4c72a872bf" (UID: "07b5d6f6-b886-4992-a758-0f4c72a872bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.597092 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b5d6f6-b886-4992-a758-0f4c72a872bf" (UID: "07b5d6f6-b886-4992-a758-0f4c72a872bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.655484 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.655521 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.655964 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b5d6f6-b886-4992-a758-0f4c72a872bf-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.655991 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thj4\" (UniqueName: \"kubernetes.io/projected/07b5d6f6-b886-4992-a758-0f4c72a872bf-kube-api-access-4thj4\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.792979 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.884768 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rn4lx"] Mar 12 16:27:26 crc kubenswrapper[4687]: I0312 16:27:26.885001 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerName="dnsmasq-dns" containerID="cri-o://4ad76e6ccbecde7321398d99d746f5c31d0cd22a82d29667e2e3399f8a7e2f9f" gracePeriod=10 Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.181418 4687 generic.go:334] "Generic (PLEG): container finished" podID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerID="4ad76e6ccbecde7321398d99d746f5c31d0cd22a82d29667e2e3399f8a7e2f9f" exitCode=0 Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.181488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" event={"ID":"d3d4b722-5a0f-4e77-b3de-db84d696b1e4","Type":"ContainerDied","Data":"4ad76e6ccbecde7321398d99d746f5c31d0cd22a82d29667e2e3399f8a7e2f9f"} Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.185985 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r62f9" event={"ID":"07b5d6f6-b886-4992-a758-0f4c72a872bf","Type":"ContainerDied","Data":"f846fdf393fa05c5a926156558a853730f52c4f424d02be1573fdaa081219fc8"} Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.186029 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f846fdf393fa05c5a926156558a853730f52c4f424d02be1573fdaa081219fc8" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.186089 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r62f9" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.236010 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.279739 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-scripts\") pod \"4d938a4a-a974-450c-b299-2e0e704d0da1\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.279826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwhb4\" (UniqueName: \"kubernetes.io/projected/4d938a4a-a974-450c-b299-2e0e704d0da1-kube-api-access-pwhb4\") pod \"4d938a4a-a974-450c-b299-2e0e704d0da1\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.279863 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-combined-ca-bundle\") pod \"4d938a4a-a974-450c-b299-2e0e704d0da1\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.279926 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-config-data\") pod \"4d938a4a-a974-450c-b299-2e0e704d0da1\" (UID: \"4d938a4a-a974-450c-b299-2e0e704d0da1\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.283813 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-scripts" (OuterVolumeSpecName: "scripts") pod "4d938a4a-a974-450c-b299-2e0e704d0da1" (UID: "4d938a4a-a974-450c-b299-2e0e704d0da1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.285713 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.295590 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d938a4a-a974-450c-b299-2e0e704d0da1-kube-api-access-pwhb4" (OuterVolumeSpecName: "kube-api-access-pwhb4") pod "4d938a4a-a974-450c-b299-2e0e704d0da1" (UID: "4d938a4a-a974-450c-b299-2e0e704d0da1"). InnerVolumeSpecName "kube-api-access-pwhb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.321842 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d938a4a-a974-450c-b299-2e0e704d0da1" (UID: "4d938a4a-a974-450c-b299-2e0e704d0da1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.335451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-config-data" (OuterVolumeSpecName: "config-data") pod "4d938a4a-a974-450c-b299-2e0e704d0da1" (UID: "4d938a4a-a974-450c-b299-2e0e704d0da1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.388257 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwhb4\" (UniqueName: \"kubernetes.io/projected/4d938a4a-a974-450c-b299-2e0e704d0da1-kube-api-access-pwhb4\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.388290 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.388300 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d938a4a-a974-450c-b299-2e0e704d0da1-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.722760 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.723307 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-log" containerID="cri-o://cf3b36ba80f50b2fa69822d32fb1d6ec8bed549ad4a2644c7fd8da4099801d13" gracePeriod=30 Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.723445 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-api" containerID="cri-o://60f3d47b339beec67f00c3939ae89307245941a9b8d8c7d7847eb76bc7b139e2" gracePeriod=30 Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.751128 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.753254 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.753456 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f902afeb-cd68-4dc1-ab83-a5cc86808687" containerName="nova-scheduler-scheduler" containerID="cri-o://89defcaa532feb5f1040356f83c3bfc5d16c49f680e99b2718ae78d46a679b8b" gracePeriod=30 Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.796654 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-nb\") pod \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.797010 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-sb\") pod \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.797187 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-swift-storage-0\") pod \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.797448 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-config\") pod \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.797603 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-svc\") pod \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.797848 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq586\" (UniqueName: \"kubernetes.io/projected/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-kube-api-access-vq586\") pod \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\" (UID: \"d3d4b722-5a0f-4e77-b3de-db84d696b1e4\") " Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.809546 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-kube-api-access-vq586" (OuterVolumeSpecName: "kube-api-access-vq586") pod "d3d4b722-5a0f-4e77-b3de-db84d696b1e4" (UID: "d3d4b722-5a0f-4e77-b3de-db84d696b1e4"). InnerVolumeSpecName "kube-api-access-vq586". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.887508 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-config" (OuterVolumeSpecName: "config") pod "d3d4b722-5a0f-4e77-b3de-db84d696b1e4" (UID: "d3d4b722-5a0f-4e77-b3de-db84d696b1e4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.902351 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq586\" (UniqueName: \"kubernetes.io/projected/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-kube-api-access-vq586\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.902403 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.913862 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3d4b722-5a0f-4e77-b3de-db84d696b1e4" (UID: "d3d4b722-5a0f-4e77-b3de-db84d696b1e4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.914818 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3d4b722-5a0f-4e77-b3de-db84d696b1e4" (UID: "d3d4b722-5a0f-4e77-b3de-db84d696b1e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.936902 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3d4b722-5a0f-4e77-b3de-db84d696b1e4" (UID: "d3d4b722-5a0f-4e77-b3de-db84d696b1e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:27:27 crc kubenswrapper[4687]: I0312 16:27:27.939011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3d4b722-5a0f-4e77-b3de-db84d696b1e4" (UID: "d3d4b722-5a0f-4e77-b3de-db84d696b1e4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.004734 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.004787 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.004797 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.004806 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3d4b722-5a0f-4e77-b3de-db84d696b1e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.198654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" event={"ID":"d3d4b722-5a0f-4e77-b3de-db84d696b1e4","Type":"ContainerDied","Data":"153d22eb162b485c5cb20de335a897c7b9f66305d78979957fc5a55a155b15e9"} Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.198701 4687 scope.go:117] "RemoveContainer" containerID="4ad76e6ccbecde7321398d99d746f5c31d0cd22a82d29667e2e3399f8a7e2f9f" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.198736 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.205407 4687 generic.go:334] "Generic (PLEG): container finished" podID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerID="cf3b36ba80f50b2fa69822d32fb1d6ec8bed549ad4a2644c7fd8da4099801d13" exitCode=143 Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.205712 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa7465a-eef7-4b1d-951f-153168f06d07","Type":"ContainerDied","Data":"cf3b36ba80f50b2fa69822d32fb1d6ec8bed549ad4a2644c7fd8da4099801d13"} Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.207686 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wblmk" event={"ID":"9201a49a-2a15-400d-ab4f-20f55725f719","Type":"ContainerStarted","Data":"273b0090705c7d23b745fc634676d50822824a34397c5540a0467cace64653ef"} Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.209555 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" event={"ID":"4d938a4a-a974-450c-b299-2e0e704d0da1","Type":"ContainerDied","Data":"38c72697d400536caa8b825044a62e9fb055abaf4c1072df518157d2d0baeb7a"} Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.209576 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c72697d400536caa8b825044a62e9fb055abaf4c1072df518157d2d0baeb7a" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.209640 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wnhb5" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.231686 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-wblmk" podStartSLOduration=2.222716797 podStartE2EDuration="7.231655206s" podCreationTimestamp="2026-03-12 16:27:21 +0000 UTC" firstStartedPulling="2026-03-12 16:27:22.258934441 +0000 UTC m=+1491.222896785" lastFinishedPulling="2026-03-12 16:27:27.26787285 +0000 UTC m=+1496.231835194" observedRunningTime="2026-03-12 16:27:28.225123311 +0000 UTC m=+1497.189085655" watchObservedRunningTime="2026-03-12 16:27:28.231655206 +0000 UTC m=+1497.195617550" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.267812 4687 scope.go:117] "RemoveContainer" containerID="b33987c10d5e834a44105fcd07e880c3bc43048fc4c575a4d77f9b8ff0b312a8" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.320422 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rn4lx"] Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.333557 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-rn4lx"] Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.355418 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 16:27:28 crc kubenswrapper[4687]: E0312 16:27:28.355949 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerName="dnsmasq-dns" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.355969 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerName="dnsmasq-dns" Mar 12 16:27:28 crc kubenswrapper[4687]: E0312 16:27:28.355986 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerName="init" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.355993 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerName="init" Mar 12 16:27:28 crc kubenswrapper[4687]: E0312 16:27:28.356011 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b5d6f6-b886-4992-a758-0f4c72a872bf" containerName="nova-manage" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.356017 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b5d6f6-b886-4992-a758-0f4c72a872bf" containerName="nova-manage" Mar 12 16:27:28 crc kubenswrapper[4687]: E0312 16:27:28.356037 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d938a4a-a974-450c-b299-2e0e704d0da1" containerName="nova-cell1-conductor-db-sync" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.356043 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d938a4a-a974-450c-b299-2e0e704d0da1" containerName="nova-cell1-conductor-db-sync" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.356257 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b5d6f6-b886-4992-a758-0f4c72a872bf" containerName="nova-manage" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.356278 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerName="dnsmasq-dns" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.356291 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d938a4a-a974-450c-b299-2e0e704d0da1" containerName="nova-cell1-conductor-db-sync" Mar 12 
16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.357289 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.360424 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.367021 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.424397 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvt64\" (UniqueName: \"kubernetes.io/projected/3d74af86-ee5f-4dda-b580-f199b1841b46-kube-api-access-cvt64\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.424615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d74af86-ee5f-4dda-b580-f199b1841b46-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.424733 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d74af86-ee5f-4dda-b580-f199b1841b46-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.527225 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvt64\" (UniqueName: \"kubernetes.io/projected/3d74af86-ee5f-4dda-b580-f199b1841b46-kube-api-access-cvt64\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.527275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d74af86-ee5f-4dda-b580-f199b1841b46-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.527301 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d74af86-ee5f-4dda-b580-f199b1841b46-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.531939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d74af86-ee5f-4dda-b580-f199b1841b46-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.536844 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d74af86-ee5f-4dda-b580-f199b1841b46-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " 
pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.546336 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvt64\" (UniqueName: \"kubernetes.io/projected/3d74af86-ee5f-4dda-b580-f199b1841b46-kube-api-access-cvt64\") pod \"nova-cell1-conductor-0\" (UID: \"3d74af86-ee5f-4dda-b580-f199b1841b46\") " pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:28 crc kubenswrapper[4687]: I0312 16:27:28.742216 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:29 crc kubenswrapper[4687]: I0312 16:27:29.234860 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 16:27:29 crc kubenswrapper[4687]: W0312 16:27:29.245739 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d74af86_ee5f_4dda_b580_f199b1841b46.slice/crio-c3ac950a3afa21fee4fd54746967199a96cb88764f26f9bb74b5d4e7b97f1c8a WatchSource:0}: Error finding container c3ac950a3afa21fee4fd54746967199a96cb88764f26f9bb74b5d4e7b97f1c8a: Status 404 returned error can't find the container with id c3ac950a3afa21fee4fd54746967199a96cb88764f26f9bb74b5d4e7b97f1c8a Mar 12 16:27:29 crc kubenswrapper[4687]: I0312 16:27:29.745890 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" path="/var/lib/kubelet/pods/d3d4b722-5a0f-4e77-b3de-db84d696b1e4/volumes" Mar 12 16:27:30 crc kubenswrapper[4687]: I0312 16:27:30.238554 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d74af86-ee5f-4dda-b580-f199b1841b46","Type":"ContainerStarted","Data":"b8ee33cda44df870e6e0f4929d886438a1a2a16cb78da911d02f8e28904a4019"} Mar 12 16:27:30 crc kubenswrapper[4687]: I0312 16:27:30.239234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d74af86-ee5f-4dda-b580-f199b1841b46","Type":"ContainerStarted","Data":"c3ac950a3afa21fee4fd54746967199a96cb88764f26f9bb74b5d4e7b97f1c8a"} Mar 12 16:27:30 crc kubenswrapper[4687]: I0312 16:27:30.239426 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:30 crc kubenswrapper[4687]: I0312 16:27:30.241794 4687 generic.go:334] "Generic (PLEG): container finished" podID="9201a49a-2a15-400d-ab4f-20f55725f719" containerID="273b0090705c7d23b745fc634676d50822824a34397c5540a0467cace64653ef" exitCode=0 Mar 12 16:27:30 crc kubenswrapper[4687]: I0312 16:27:30.241839 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wblmk" event={"ID":"9201a49a-2a15-400d-ab4f-20f55725f719","Type":"ContainerDied","Data":"273b0090705c7d23b745fc634676d50822824a34397c5540a0467cace64653ef"} Mar 12 16:27:30 crc kubenswrapper[4687]: I0312 16:27:30.264049 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.264031071 podStartE2EDuration="2.264031071s" podCreationTimestamp="2026-03-12 16:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:30.253759154 +0000 UTC m=+1499.217721498" watchObservedRunningTime="2026-03-12 16:27:30.264031071 +0000 UTC m=+1499.227993415" Mar 12 16:27:31 crc kubenswrapper[4687]: E0312 16:27:31.009918 4687 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice/crio-conmon-89defcaa532feb5f1040356f83c3bfc5d16c49f680e99b2718ae78d46a679b8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice/crio-89defcaa532feb5f1040356f83c3bfc5d16c49f680e99b2718ae78d46a679b8b.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.278638 4687 generic.go:334] "Generic (PLEG): container finished" podID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerID="60f3d47b339beec67f00c3939ae89307245941a9b8d8c7d7847eb76bc7b139e2" exitCode=0 Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.278760 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa7465a-eef7-4b1d-951f-153168f06d07","Type":"ContainerDied","Data":"60f3d47b339beec67f00c3939ae89307245941a9b8d8c7d7847eb76bc7b139e2"} Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.287641 4687 generic.go:334] "Generic (PLEG): container finished" podID="f902afeb-cd68-4dc1-ab83-a5cc86808687" containerID="89defcaa532feb5f1040356f83c3bfc5d16c49f680e99b2718ae78d46a679b8b" exitCode=0 Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.288816 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f902afeb-cd68-4dc1-ab83-a5cc86808687","Type":"ContainerDied","Data":"89defcaa532feb5f1040356f83c3bfc5d16c49f680e99b2718ae78d46a679b8b"} Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.452249 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.506654 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-config-data\") pod \"f902afeb-cd68-4dc1-ab83-a5cc86808687\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.506882 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-combined-ca-bundle\") pod \"f902afeb-cd68-4dc1-ab83-a5cc86808687\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.506966 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhdmd\" (UniqueName: \"kubernetes.io/projected/f902afeb-cd68-4dc1-ab83-a5cc86808687-kube-api-access-hhdmd\") pod \"f902afeb-cd68-4dc1-ab83-a5cc86808687\" (UID: \"f902afeb-cd68-4dc1-ab83-a5cc86808687\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.527196 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f902afeb-cd68-4dc1-ab83-a5cc86808687-kube-api-access-hhdmd" (OuterVolumeSpecName: "kube-api-access-hhdmd") pod "f902afeb-cd68-4dc1-ab83-a5cc86808687" (UID: "f902afeb-cd68-4dc1-ab83-a5cc86808687"). InnerVolumeSpecName "kube-api-access-hhdmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.562563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-config-data" (OuterVolumeSpecName: "config-data") pod "f902afeb-cd68-4dc1-ab83-a5cc86808687" (UID: "f902afeb-cd68-4dc1-ab83-a5cc86808687"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.605854 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.610037 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.610065 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhdmd\" (UniqueName: \"kubernetes.io/projected/f902afeb-cd68-4dc1-ab83-a5cc86808687-kube-api-access-hhdmd\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.609621 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f902afeb-cd68-4dc1-ab83-a5cc86808687" (UID: "f902afeb-cd68-4dc1-ab83-a5cc86808687"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.710729 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-combined-ca-bundle\") pod \"9201a49a-2a15-400d-ab4f-20f55725f719\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.710826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-config-data\") pod \"9201a49a-2a15-400d-ab4f-20f55725f719\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.710861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-scripts\") pod \"9201a49a-2a15-400d-ab4f-20f55725f719\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.711004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5787j\" (UniqueName: \"kubernetes.io/projected/9201a49a-2a15-400d-ab4f-20f55725f719-kube-api-access-5787j\") pod \"9201a49a-2a15-400d-ab4f-20f55725f719\" (UID: \"9201a49a-2a15-400d-ab4f-20f55725f719\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.711649 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902afeb-cd68-4dc1-ab83-a5cc86808687-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.713931 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-scripts" 
(OuterVolumeSpecName: "scripts") pod "9201a49a-2a15-400d-ab4f-20f55725f719" (UID: "9201a49a-2a15-400d-ab4f-20f55725f719"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.714578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9201a49a-2a15-400d-ab4f-20f55725f719-kube-api-access-5787j" (OuterVolumeSpecName: "kube-api-access-5787j") pod "9201a49a-2a15-400d-ab4f-20f55725f719" (UID: "9201a49a-2a15-400d-ab4f-20f55725f719"). InnerVolumeSpecName "kube-api-access-5787j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.745857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-config-data" (OuterVolumeSpecName: "config-data") pod "9201a49a-2a15-400d-ab4f-20f55725f719" (UID: "9201a49a-2a15-400d-ab4f-20f55725f719"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.762117 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9201a49a-2a15-400d-ab4f-20f55725f719" (UID: "9201a49a-2a15-400d-ab4f-20f55725f719"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.797473 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.820303 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.820327 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.820403 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9201a49a-2a15-400d-ab4f-20f55725f719-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.820434 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5787j\" (UniqueName: \"kubernetes.io/projected/9201a49a-2a15-400d-ab4f-20f55725f719-kube-api-access-5787j\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.923442 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2h8\" (UniqueName: \"kubernetes.io/projected/aaa7465a-eef7-4b1d-951f-153168f06d07-kube-api-access-sq2h8\") pod \"aaa7465a-eef7-4b1d-951f-153168f06d07\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.923610 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-combined-ca-bundle\") pod \"aaa7465a-eef7-4b1d-951f-153168f06d07\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " Mar 12 16:27:31 crc kubenswrapper[4687]: 
I0312 16:27:31.923818 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-config-data\") pod \"aaa7465a-eef7-4b1d-951f-153168f06d07\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.923935 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa7465a-eef7-4b1d-951f-153168f06d07-logs\") pod \"aaa7465a-eef7-4b1d-951f-153168f06d07\" (UID: \"aaa7465a-eef7-4b1d-951f-153168f06d07\") " Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.924926 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaa7465a-eef7-4b1d-951f-153168f06d07-logs" (OuterVolumeSpecName: "logs") pod "aaa7465a-eef7-4b1d-951f-153168f06d07" (UID: "aaa7465a-eef7-4b1d-951f-153168f06d07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.932970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa7465a-eef7-4b1d-951f-153168f06d07-kube-api-access-sq2h8" (OuterVolumeSpecName: "kube-api-access-sq2h8") pod "aaa7465a-eef7-4b1d-951f-153168f06d07" (UID: "aaa7465a-eef7-4b1d-951f-153168f06d07"). InnerVolumeSpecName "kube-api-access-sq2h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.967086 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaa7465a-eef7-4b1d-951f-153168f06d07" (UID: "aaa7465a-eef7-4b1d-951f-153168f06d07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:31 crc kubenswrapper[4687]: I0312 16:27:31.988848 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-config-data" (OuterVolumeSpecName: "config-data") pod "aaa7465a-eef7-4b1d-951f-153168f06d07" (UID: "aaa7465a-eef7-4b1d-951f-153168f06d07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.028011 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.028046 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaa7465a-eef7-4b1d-951f-153168f06d07-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.028057 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq2h8\" (UniqueName: \"kubernetes.io/projected/aaa7465a-eef7-4b1d-951f-153168f06d07-kube-api-access-sq2h8\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.028067 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7465a-eef7-4b1d-951f-153168f06d07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.304535 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.304549 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaa7465a-eef7-4b1d-951f-153168f06d07","Type":"ContainerDied","Data":"28481b5fccb007cdb8cd4cc4f57d6fec35bca96c36d14ec1ef7161280252aaba"} Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.304678 4687 scope.go:117] "RemoveContainer" containerID="60f3d47b339beec67f00c3939ae89307245941a9b8d8c7d7847eb76bc7b139e2" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.307886 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f902afeb-cd68-4dc1-ab83-a5cc86808687","Type":"ContainerDied","Data":"12f494a8d78f9c2e35325fa70cbc41bb34418c1b3d97e50c980563183bd37585"} Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.307943 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.310193 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wblmk" event={"ID":"9201a49a-2a15-400d-ab4f-20f55725f719","Type":"ContainerDied","Data":"888b4878d1ff7a52007a14287ee41512344552f580d8721ee6ed8c698c4fcf50"} Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.310241 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="888b4878d1ff7a52007a14287ee41512344552f580d8721ee6ed8c698c4fcf50" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.310303 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wblmk" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.354239 4687 scope.go:117] "RemoveContainer" containerID="cf3b36ba80f50b2fa69822d32fb1d6ec8bed549ad4a2644c7fd8da4099801d13" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.357287 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.378537 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.382418 4687 scope.go:117] "RemoveContainer" containerID="89defcaa532feb5f1040356f83c3bfc5d16c49f680e99b2718ae78d46a679b8b" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.413986 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.414934 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7756b9d78c-rn4lx" podUID="d3d4b722-5a0f-4e77-b3de-db84d696b1e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.218:5353: i/o timeout" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.430976 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.441595 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: E0312 16:27:32.442167 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-log" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442189 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-log" Mar 12 16:27:32 crc kubenswrapper[4687]: E0312 16:27:32.442215 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9201a49a-2a15-400d-ab4f-20f55725f719" containerName="aodh-db-sync" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442224 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9201a49a-2a15-400d-ab4f-20f55725f719" containerName="aodh-db-sync" Mar 12 16:27:32 crc kubenswrapper[4687]: E0312 16:27:32.442259 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f902afeb-cd68-4dc1-ab83-a5cc86808687" containerName="nova-scheduler-scheduler" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442266 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902afeb-cd68-4dc1-ab83-a5cc86808687" containerName="nova-scheduler-scheduler" Mar 12 16:27:32 crc kubenswrapper[4687]: E0312 16:27:32.442280 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-api" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442286 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-api" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442558 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-log" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442596 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" containerName="nova-api-api" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442612 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9201a49a-2a15-400d-ab4f-20f55725f719" containerName="aodh-db-sync" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.442632 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902afeb-cd68-4dc1-ab83-a5cc86808687" containerName="nova-scheduler-scheduler" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.443626 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.445996 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.453027 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.458105 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.461272 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.471388 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.487067 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.536667 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.536725 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb5c86-04c9-48c0-87f8-0230288ad725-logs\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.536754 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mz29\" (UniqueName: \"kubernetes.io/projected/a8bb5c86-04c9-48c0-87f8-0230288ad725-kube-api-access-4mz29\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.536789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bckbq\" (UniqueName: \"kubernetes.io/projected/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-kube-api-access-bckbq\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.536815 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.536833 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-config-data\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.536852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-config-data\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.638348 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mz29\" (UniqueName: \"kubernetes.io/projected/a8bb5c86-04c9-48c0-87f8-0230288ad725-kube-api-access-4mz29\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.638480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckbq\" (UniqueName: \"kubernetes.io/projected/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-kube-api-access-bckbq\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.638515 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.638535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-config-data\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.638556 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-config-data\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.638737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.638764 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb5c86-04c9-48c0-87f8-0230288ad725-logs\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.639500 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb5c86-04c9-48c0-87f8-0230288ad725-logs\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.643826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.644840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-config-data\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.644937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-config-data\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.646071 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.656554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bckbq\" (UniqueName: \"kubernetes.io/projected/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-kube-api-access-bckbq\") pod \"nova-scheduler-0\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.660881 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mz29\" (UniqueName: \"kubernetes.io/projected/a8bb5c86-04c9-48c0-87f8-0230288ad725-kube-api-access-4mz29\") pod \"nova-api-0\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " pod="openstack/nova-api-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.771564 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:27:32 crc kubenswrapper[4687]: I0312 16:27:32.790400 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.300242 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.326425 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1261bb11-6314-4f83-98cd-e7d7abaf2d6c","Type":"ContainerStarted","Data":"4d77099dc3fe5a12182eb6e9075289621d39f9a6817021d47bca7f3d9c76e1ed"} Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.365936 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.484853 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.491164 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.498353 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.498370 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.498536 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-w4khk" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.503173 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.563846 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nvk\" (UniqueName: \"kubernetes.io/projected/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-kube-api-access-w4nvk\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.563928 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-config-data\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.564021 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-scripts\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.564077 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.666042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-config-data\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.666275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-scripts\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.666399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.666510 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4nvk\" (UniqueName: \"kubernetes.io/projected/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-kube-api-access-w4nvk\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: 
I0312 16:27:33.673682 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-config-data\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.679286 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-scripts\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.679381 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.692155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4nvk\" (UniqueName: \"kubernetes.io/projected/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-kube-api-access-w4nvk\") pod \"aodh-0\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " pod="openstack/aodh-0" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.767418 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa7465a-eef7-4b1d-951f-153168f06d07" path="/var/lib/kubelet/pods/aaa7465a-eef7-4b1d-951f-153168f06d07/volumes" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.770407 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f902afeb-cd68-4dc1-ab83-a5cc86808687" path="/var/lib/kubelet/pods/f902afeb-cd68-4dc1-ab83-a5cc86808687/volumes" Mar 12 16:27:33 crc kubenswrapper[4687]: I0312 16:27:33.891811 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 16:27:34 crc kubenswrapper[4687]: I0312 16:27:34.336791 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8bb5c86-04c9-48c0-87f8-0230288ad725","Type":"ContainerStarted","Data":"5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d"} Mar 12 16:27:34 crc kubenswrapper[4687]: I0312 16:27:34.337229 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8bb5c86-04c9-48c0-87f8-0230288ad725","Type":"ContainerStarted","Data":"92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b"} Mar 12 16:27:34 crc kubenswrapper[4687]: I0312 16:27:34.337239 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8bb5c86-04c9-48c0-87f8-0230288ad725","Type":"ContainerStarted","Data":"cba0b8e9d34688a917b43da8abde36fcfc574d102ed304ea9034848b1b42efc5"} Mar 12 16:27:34 crc kubenswrapper[4687]: I0312 16:27:34.338009 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1261bb11-6314-4f83-98cd-e7d7abaf2d6c","Type":"ContainerStarted","Data":"87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b"} Mar 12 16:27:34 crc kubenswrapper[4687]: I0312 16:27:34.378707 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3786888 podStartE2EDuration="2.3786888s" podCreationTimestamp="2026-03-12 16:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:34.375059993 +0000 UTC m=+1503.339022337" watchObservedRunningTime="2026-03-12 16:27:34.3786888 +0000 UTC m=+1503.342651144" Mar 12 16:27:34 crc kubenswrapper[4687]: I0312 16:27:34.546317 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.54629917 podStartE2EDuration="2.54629917s" podCreationTimestamp="2026-03-12 16:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:34.424768247 +0000 UTC m=+1503.388730591" watchObservedRunningTime="2026-03-12 16:27:34.54629917 +0000 UTC m=+1503.510261504" Mar 12 16:27:34 crc kubenswrapper[4687]: I0312 16:27:34.556713 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 16:27:35 crc kubenswrapper[4687]: I0312 16:27:35.356780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerStarted","Data":"83a5358925be1c8290279db0b8c8bbf2cd2fcce4c764f5498a61bd9108b0cc87"} Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.238488 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.239293 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-central-agent" containerID="cri-o://04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" gracePeriod=30 Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.239449 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="proxy-httpd" 
containerID="cri-o://7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" gracePeriod=30 Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.239497 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="sg-core" containerID="cri-o://ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" gracePeriod=30 Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.239538 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-notification-agent" containerID="cri-o://95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" gracePeriod=30 Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.256351 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.240:3000/\": EOF" Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.377858 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerStarted","Data":"b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4"} Mar 12 16:27:36 crc kubenswrapper[4687]: I0312 16:27:36.745265 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.320928 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.381899 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-combined-ca-bundle\") pod \"423af688-f49e-42a8-b2d2-623cd01ac948\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.381974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6nbg\" (UniqueName: \"kubernetes.io/projected/423af688-f49e-42a8-b2d2-623cd01ac948-kube-api-access-b6nbg\") pod \"423af688-f49e-42a8-b2d2-623cd01ac948\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.382008 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-scripts\") pod \"423af688-f49e-42a8-b2d2-623cd01ac948\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.382087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-sg-core-conf-yaml\") pod \"423af688-f49e-42a8-b2d2-623cd01ac948\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.382879 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-log-httpd\") pod \"423af688-f49e-42a8-b2d2-623cd01ac948\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.383290 4687 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "423af688-f49e-42a8-b2d2-623cd01ac948" (UID: "423af688-f49e-42a8-b2d2-623cd01ac948"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.382915 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-config-data\") pod \"423af688-f49e-42a8-b2d2-623cd01ac948\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.383540 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-run-httpd\") pod \"423af688-f49e-42a8-b2d2-623cd01ac948\" (UID: \"423af688-f49e-42a8-b2d2-623cd01ac948\") " Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.384281 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.385278 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "423af688-f49e-42a8-b2d2-623cd01ac948" (UID: "423af688-f49e-42a8-b2d2-623cd01ac948"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.389064 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423af688-f49e-42a8-b2d2-623cd01ac948-kube-api-access-b6nbg" (OuterVolumeSpecName: "kube-api-access-b6nbg") pod "423af688-f49e-42a8-b2d2-623cd01ac948" (UID: "423af688-f49e-42a8-b2d2-623cd01ac948"). InnerVolumeSpecName "kube-api-access-b6nbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.389899 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-scripts" (OuterVolumeSpecName: "scripts") pod "423af688-f49e-42a8-b2d2-623cd01ac948" (UID: "423af688-f49e-42a8-b2d2-623cd01ac948"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.393055 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerStarted","Data":"390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371"} Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.396953 4687 generic.go:334] "Generic (PLEG): container finished" podID="423af688-f49e-42a8-b2d2-623cd01ac948" containerID="7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" exitCode=0 Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.396975 4687 generic.go:334] "Generic (PLEG): container finished" podID="423af688-f49e-42a8-b2d2-623cd01ac948" containerID="ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" exitCode=2 Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.396982 4687 generic.go:334] "Generic (PLEG): container finished" podID="423af688-f49e-42a8-b2d2-623cd01ac948" containerID="95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" exitCode=0 Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.396990 4687 generic.go:334] "Generic (PLEG): container finished" podID="423af688-f49e-42a8-b2d2-623cd01ac948" containerID="04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" exitCode=0 Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.397019 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerDied","Data":"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed"} Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.397037 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerDied","Data":"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4"} Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.397048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerDied","Data":"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22"} Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.397057 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerDied","Data":"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e"} Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.397108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"423af688-f49e-42a8-b2d2-623cd01ac948","Type":"ContainerDied","Data":"a9a549c5a53e7bf20aa6668ff704a1cf31d188e8ee6ea16fd156c0a2b9f31342"} Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.397124 4687 scope.go:117] "RemoveContainer" containerID="7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.397315 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.464524 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "423af688-f49e-42a8-b2d2-623cd01ac948" (UID: "423af688-f49e-42a8-b2d2-623cd01ac948"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.486079 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.486109 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/423af688-f49e-42a8-b2d2-623cd01ac948-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.486118 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6nbg\" (UniqueName: \"kubernetes.io/projected/423af688-f49e-42a8-b2d2-623cd01ac948-kube-api-access-b6nbg\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.486128 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.518060 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "423af688-f49e-42a8-b2d2-623cd01ac948" (UID: "423af688-f49e-42a8-b2d2-623cd01ac948"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.575097 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-config-data" (OuterVolumeSpecName: "config-data") pod "423af688-f49e-42a8-b2d2-623cd01ac948" (UID: "423af688-f49e-42a8-b2d2-623cd01ac948"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.587993 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.588027 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/423af688-f49e-42a8-b2d2-623cd01ac948-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.682261 4687 scope.go:117] "RemoveContainer" containerID="ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.703921 4687 scope.go:117] "RemoveContainer" containerID="95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.755865 4687 scope.go:117] "RemoveContainer" containerID="04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.766233 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.774828 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.785253 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.802880 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.803774 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="proxy-httpd" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.803797 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="proxy-httpd" Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.803842 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-central-agent" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.803849 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-central-agent" Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.803868 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="sg-core" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.803875 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="sg-core" Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.803889 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-notification-agent" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.803894 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-notification-agent" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.804181 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="sg-core" Mar 12 16:27:37 crc 
kubenswrapper[4687]: I0312 16:27:37.804202 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-central-agent" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.804227 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="proxy-httpd" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.804235 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="ceilometer-notification-agent" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.806503 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.809312 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.811834 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.817635 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.820543 4687 scope.go:117] "RemoveContainer" containerID="7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.820973 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": container with ID starting with 7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed not found: ID does not exist" containerID="7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.821100 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed"} err="failed to get container status \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": rpc error: code = NotFound desc = could not find container \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": container with ID starting with 7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.821204 4687 scope.go:117] "RemoveContainer" containerID="ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.822004 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": container with ID starting with ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4 not found: ID does not exist" containerID="ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.822066 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4"} err="failed to get container status \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": rpc error: code = NotFound desc = could not find container 
\"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": container with ID starting with ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.822096 4687 scope.go:117] "RemoveContainer" containerID="95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.823418 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": container with ID starting with 95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22 not found: ID does not exist" containerID="95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.823550 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22"} err="failed to get container status \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": rpc error: code = NotFound desc = could not find container \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": container with ID starting with 95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.823649 4687 scope.go:117] "RemoveContainer" containerID="04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" Mar 12 16:27:37 crc kubenswrapper[4687]: E0312 16:27:37.824469 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": container with ID starting with 04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e not found: ID does not exist" containerID="04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.824504 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e"} err="failed to get container status \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": rpc error: code = NotFound desc = could not find container \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": container with ID starting with 04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.824528 4687 scope.go:117] "RemoveContainer" containerID="7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.829003 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed"} err="failed to get container status \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": rpc error: code = NotFound desc = could not find container \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": container with ID starting with 7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.829046 4687 scope.go:117] "RemoveContainer" 
containerID="ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.830835 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4"} err="failed to get container status \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": rpc error: code = NotFound desc = could not find container \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": container with ID starting with ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.830948 4687 scope.go:117] "RemoveContainer" containerID="95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.831850 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22"} err="failed to get container status \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": rpc error: code = NotFound desc = could not find container \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": container with ID starting with 95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.831913 4687 scope.go:117] "RemoveContainer" containerID="04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.833541 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e"} err="failed to get container status \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": rpc error: code = NotFound desc = could not find container \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": container with ID starting with 04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.833568 4687 scope.go:117] "RemoveContainer" containerID="7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.833811 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed"} err="failed to get container status \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": rpc error: code = NotFound desc = could not find container \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": container with ID starting with 7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.833831 4687 scope.go:117] "RemoveContainer" containerID="ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.834039 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4"} err="failed to get container status \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": rpc error: code = NotFound desc = could not find 
container \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": container with ID starting with ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.834061 4687 scope.go:117] "RemoveContainer" containerID="95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.834290 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22"} err="failed to get container status \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": rpc error: code = NotFound desc = could not find container \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": container with ID starting with 95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.834318 4687 scope.go:117] "RemoveContainer" containerID="04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.835125 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e"} err="failed to get container status \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": rpc error: code = NotFound desc = could not find container \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": container with ID starting with 04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.835149 4687 scope.go:117] "RemoveContainer" containerID="7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.835550 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed"} err="failed to get container status \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": rpc error: code = NotFound desc = could not find container \"7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed\": container with ID starting with 7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.835575 4687 scope.go:117] "RemoveContainer" containerID="ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.835781 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4"} err="failed to get container status \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": rpc error: code = NotFound desc = could not find container \"ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4\": container with ID starting with ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.835800 4687 scope.go:117] "RemoveContainer" containerID="95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.836140 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22"} err="failed to get container status \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": rpc error: code = NotFound desc = could not find container \"95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22\": container with ID starting with 95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22 not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.836163 4687 scope.go:117] "RemoveContainer" containerID="04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.836540 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e"} err="failed to get container status \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": rpc error: code = NotFound desc = could not find container \"04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e\": container with ID starting with 04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e not found: ID does not exist" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.894842 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftwv9"] Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.897206 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.899598 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-log-httpd\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.899920 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-scripts\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.899950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-config-data\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.899996 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.900053 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-run-httpd\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 
16:27:37.900183 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8x49\" (UniqueName: \"kubernetes.io/projected/61303610-0db1-41ff-8ccc-11e4bc9d9498-kube-api-access-f8x49\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.900242 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:37 crc kubenswrapper[4687]: I0312 16:27:37.917257 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftwv9"] Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002455 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8x49\" (UniqueName: \"kubernetes.io/projected/61303610-0db1-41ff-8ccc-11e4bc9d9498-kube-api-access-f8x49\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002606 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-log-httpd\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002636 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-utilities\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9qr2\" (UniqueName: \"kubernetes.io/projected/c11ced5e-d2bf-469d-ac52-b8e99c304267-kube-api-access-d9qr2\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002810 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-scripts\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002839 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-config-data\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 
16:27:38.002883 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002922 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-catalog-content\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.002955 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-run-httpd\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.003552 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-run-httpd\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.005393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-log-httpd\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.009568 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.009622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-scripts\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.009677 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-config-data\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.010203 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.024064 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8x49\" (UniqueName: \"kubernetes.io/projected/61303610-0db1-41ff-8ccc-11e4bc9d9498-kube-api-access-f8x49\") pod \"ceilometer-0\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 
16:27:38.105173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-utilities\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.105264 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9qr2\" (UniqueName: \"kubernetes.io/projected/c11ced5e-d2bf-469d-ac52-b8e99c304267-kube-api-access-d9qr2\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.105414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-catalog-content\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.105739 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-utilities\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.105945 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-catalog-content\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.124932 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9qr2\" (UniqueName: \"kubernetes.io/projected/c11ced5e-d2bf-469d-ac52-b8e99c304267-kube-api-access-d9qr2\") pod \"community-operators-ftwv9\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.125095 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.240135 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.800418 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 12 16:27:38 crc kubenswrapper[4687]: I0312 16:27:38.814312 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.078163 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftwv9"] Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.246273 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.422415 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerStarted","Data":"229c564209b0bc42f3739a521d382277e9f5aee10385d3465cc49e26c4e1eb17"} Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.424997 4687 generic.go:334] "Generic (PLEG): container finished" podID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerID="d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32" exitCode=0 Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.425101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftwv9" event={"ID":"c11ced5e-d2bf-469d-ac52-b8e99c304267","Type":"ContainerDied","Data":"d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32"} Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.425218 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftwv9" event={"ID":"c11ced5e-d2bf-469d-ac52-b8e99c304267","Type":"ContainerStarted","Data":"a199f93379e51963349f65b720864077bf4fe78c9bf82349ee2db943fe5be78e"} Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.427903 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerStarted","Data":"5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef"} Mar 12 16:27:39 crc kubenswrapper[4687]: I0312 16:27:39.748199 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" path="/var/lib/kubelet/pods/423af688-f49e-42a8-b2d2-623cd01ac948/volumes" Mar 12 16:27:40 crc kubenswrapper[4687]: I0312 16:27:40.448389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerStarted","Data":"bba3018aad1ce986da36be19d15f869d851b0cc08be58d25ba5217858dcc0813"} Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.461216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftwv9" event={"ID":"c11ced5e-d2bf-469d-ac52-b8e99c304267","Type":"ContainerStarted","Data":"dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4"} Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.465717 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerStarted","Data":"a297d8c22eaafc3e99151538d12ee8c1bf348f62cec4da2c101391c630be57cc"} Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.465776 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-api" containerID="cri-o://b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4" gracePeriod=30 Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.465831 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-notifier" containerID="cri-o://5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef" gracePeriod=30 Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.465826 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-listener" containerID="cri-o://a297d8c22eaafc3e99151538d12ee8c1bf348f62cec4da2c101391c630be57cc" gracePeriod=30 Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.465852 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-evaluator" containerID="cri-o://390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371" gracePeriod=30 Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.474931 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerStarted","Data":"b8b678fda25ea65d2061b1da1c9c4fd3bfbb9141fc3f7b03640f1c4ac59070c4"} Mar 12 16:27:41 crc kubenswrapper[4687]: I0312 16:27:41.522426 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.662605121 podStartE2EDuration="8.522403673s" podCreationTimestamp="2026-03-12 16:27:33 +0000 UTC" firstStartedPulling="2026-03-12 16:27:34.565598888 +0000 UTC m=+1503.529561232" lastFinishedPulling="2026-03-12 16:27:40.42539745 +0000 UTC m=+1509.389359784" observedRunningTime="2026-03-12 16:27:41.511753297 +0000 UTC m=+1510.475715681" watchObservedRunningTime="2026-03-12 16:27:41.522403673 +0000 UTC m=+1510.486366017" Mar 12 16:27:42 crc kubenswrapper[4687]: I0312 16:27:42.495502 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerID="390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371" exitCode=0 Mar 12 16:27:42 crc kubenswrapper[4687]: I0312 16:27:42.495915 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerID="b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4" exitCode=0 Mar 12 16:27:42 crc kubenswrapper[4687]: I0312 16:27:42.495658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerDied","Data":"390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371"} Mar 12 16:27:42 crc kubenswrapper[4687]: I0312 16:27:42.500353 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerDied","Data":"b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4"} Mar 12 16:27:42 crc kubenswrapper[4687]: I0312 16:27:42.773705 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 16:27:42 crc kubenswrapper[4687]: I0312 16:27:42.791639 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 16:27:42 crc 
kubenswrapper[4687]: I0312 16:27:42.791698 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 16:27:42 crc kubenswrapper[4687]: I0312 16:27:42.819436 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.512549 4687 generic.go:334] "Generic (PLEG): container finished" podID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerID="dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4" exitCode=0 Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.512637 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftwv9" event={"ID":"c11ced5e-d2bf-469d-ac52-b8e99c304267","Type":"ContainerDied","Data":"dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4"} Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.516147 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerID="5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef" exitCode=0 Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.516226 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerDied","Data":"5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef"} Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.521444 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerStarted","Data":"658dd5afd19680a513138894f7a23ba1a561b12dc0618093f6bc84b255d0c193"} Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.568064 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.873590 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 16:27:43 crc kubenswrapper[4687]: I0312 16:27:43.873590 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 16:27:44 crc kubenswrapper[4687]: I0312 16:27:44.121911 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:27:44 crc kubenswrapper[4687]: I0312 16:27:44.121968 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:27:44 crc kubenswrapper[4687]: I0312 16:27:44.560047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftwv9" 
event={"ID":"c11ced5e-d2bf-469d-ac52-b8e99c304267","Type":"ContainerStarted","Data":"984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379"} Mar 12 16:27:44 crc kubenswrapper[4687]: I0312 16:27:44.582806 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftwv9" podStartSLOduration=2.8746309439999997 podStartE2EDuration="7.582787177s" podCreationTimestamp="2026-03-12 16:27:37 +0000 UTC" firstStartedPulling="2026-03-12 16:27:39.42676975 +0000 UTC m=+1508.390732094" lastFinishedPulling="2026-03-12 16:27:44.134925983 +0000 UTC m=+1513.098888327" observedRunningTime="2026-03-12 16:27:44.579983481 +0000 UTC m=+1513.543945835" watchObservedRunningTime="2026-03-12 16:27:44.582787177 +0000 UTC m=+1513.546749521" Mar 12 16:27:45 crc kubenswrapper[4687]: I0312 16:27:45.578116 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerStarted","Data":"f51dfd22aa2dc3463128d8b790a081172dff2d5b6614d3a4a22346dc6ebf1f62"} Mar 12 16:27:45 crc kubenswrapper[4687]: I0312 16:27:45.578458 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:27:45 crc kubenswrapper[4687]: I0312 16:27:45.578320 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-central-agent" containerID="cri-o://bba3018aad1ce986da36be19d15f869d851b0cc08be58d25ba5217858dcc0813" gracePeriod=30 Mar 12 16:27:45 crc kubenswrapper[4687]: I0312 16:27:45.578540 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="sg-core" containerID="cri-o://658dd5afd19680a513138894f7a23ba1a561b12dc0618093f6bc84b255d0c193" gracePeriod=30 Mar 12 16:27:45 crc kubenswrapper[4687]: I0312 16:27:45.578592 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="proxy-httpd" containerID="cri-o://f51dfd22aa2dc3463128d8b790a081172dff2d5b6614d3a4a22346dc6ebf1f62" gracePeriod=30 Mar 12 16:27:45 crc kubenswrapper[4687]: I0312 16:27:45.578634 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-notification-agent" containerID="cri-o://b8b678fda25ea65d2061b1da1c9c4fd3bfbb9141fc3f7b03640f1c4ac59070c4" gracePeriod=30 Mar 12 16:27:45 crc kubenswrapper[4687]: I0312 16:27:45.605794 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.168622536 podStartE2EDuration="8.605745101s" podCreationTimestamp="2026-03-12 16:27:37 +0000 UTC" firstStartedPulling="2026-03-12 16:27:39.249247443 +0000 UTC m=+1508.213209787" lastFinishedPulling="2026-03-12 16:27:44.686370018 +0000 UTC m=+1513.650332352" observedRunningTime="2026-03-12 16:27:45.604174459 +0000 UTC m=+1514.568136823" watchObservedRunningTime="2026-03-12 16:27:45.605745101 +0000 UTC m=+1514.569707445" Mar 12 16:27:46 crc kubenswrapper[4687]: I0312 16:27:46.592156 4687 generic.go:334] "Generic (PLEG): container finished" podID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerID="f51dfd22aa2dc3463128d8b790a081172dff2d5b6614d3a4a22346dc6ebf1f62" exitCode=0 Mar 12 16:27:46 crc 
kubenswrapper[4687]: I0312 16:27:46.592479 4687 generic.go:334] "Generic (PLEG): container finished" podID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerID="658dd5afd19680a513138894f7a23ba1a561b12dc0618093f6bc84b255d0c193" exitCode=2 Mar 12 16:27:46 crc kubenswrapper[4687]: I0312 16:27:46.592489 4687 generic.go:334] "Generic (PLEG): container finished" podID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerID="b8b678fda25ea65d2061b1da1c9c4fd3bfbb9141fc3f7b03640f1c4ac59070c4" exitCode=0 Mar 12 16:27:46 crc kubenswrapper[4687]: I0312 16:27:46.592236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerDied","Data":"f51dfd22aa2dc3463128d8b790a081172dff2d5b6614d3a4a22346dc6ebf1f62"} Mar 12 16:27:46 crc kubenswrapper[4687]: I0312 16:27:46.592525 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerDied","Data":"658dd5afd19680a513138894f7a23ba1a561b12dc0618093f6bc84b255d0c193"} Mar 12 16:27:46 crc kubenswrapper[4687]: I0312 16:27:46.592540 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerDied","Data":"b8b678fda25ea65d2061b1da1c9c4fd3bfbb9141fc3f7b03640f1c4ac59070c4"} Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.607668 4687 generic.go:334] "Generic (PLEG): container finished" podID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerID="bba3018aad1ce986da36be19d15f869d851b0cc08be58d25ba5217858dcc0813" exitCode=0 Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.608319 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerDied","Data":"bba3018aad1ce986da36be19d15f869d851b0cc08be58d25ba5217858dcc0813"} Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.804415 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.852662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-run-httpd\") pod \"61303610-0db1-41ff-8ccc-11e4bc9d9498\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.852826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-log-httpd\") pod \"61303610-0db1-41ff-8ccc-11e4bc9d9498\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.852873 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8x49\" (UniqueName: \"kubernetes.io/projected/61303610-0db1-41ff-8ccc-11e4bc9d9498-kube-api-access-f8x49\") pod \"61303610-0db1-41ff-8ccc-11e4bc9d9498\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.852919 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-config-data\") pod \"61303610-0db1-41ff-8ccc-11e4bc9d9498\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.853004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-combined-ca-bundle\") pod \"61303610-0db1-41ff-8ccc-11e4bc9d9498\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.853032 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-sg-core-conf-yaml\") pod \"61303610-0db1-41ff-8ccc-11e4bc9d9498\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.853108 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-scripts\") pod \"61303610-0db1-41ff-8ccc-11e4bc9d9498\" (UID: \"61303610-0db1-41ff-8ccc-11e4bc9d9498\") " Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.853763 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "61303610-0db1-41ff-8ccc-11e4bc9d9498" (UID: "61303610-0db1-41ff-8ccc-11e4bc9d9498"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.854079 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "61303610-0db1-41ff-8ccc-11e4bc9d9498" (UID: "61303610-0db1-41ff-8ccc-11e4bc9d9498"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.854671 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.865624 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61303610-0db1-41ff-8ccc-11e4bc9d9498-kube-api-access-f8x49" (OuterVolumeSpecName: "kube-api-access-f8x49") pod "61303610-0db1-41ff-8ccc-11e4bc9d9498" (UID: "61303610-0db1-41ff-8ccc-11e4bc9d9498"). InnerVolumeSpecName "kube-api-access-f8x49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.874785 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-scripts" (OuterVolumeSpecName: "scripts") pod "61303610-0db1-41ff-8ccc-11e4bc9d9498" (UID: "61303610-0db1-41ff-8ccc-11e4bc9d9498"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.894508 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "61303610-0db1-41ff-8ccc-11e4bc9d9498" (UID: "61303610-0db1-41ff-8ccc-11e4bc9d9498"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.948326 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61303610-0db1-41ff-8ccc-11e4bc9d9498" (UID: "61303610-0db1-41ff-8ccc-11e4bc9d9498"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.957198 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8x49\" (UniqueName: \"kubernetes.io/projected/61303610-0db1-41ff-8ccc-11e4bc9d9498-kube-api-access-f8x49\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.957234 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.957242 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.957251 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.957261 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61303610-0db1-41ff-8ccc-11e4bc9d9498-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:47 crc kubenswrapper[4687]: I0312 16:27:47.981931 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-config-data" (OuterVolumeSpecName: "config-data") pod "61303610-0db1-41ff-8ccc-11e4bc9d9498" (UID: "61303610-0db1-41ff-8ccc-11e4bc9d9498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.059608 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61303610-0db1-41ff-8ccc-11e4bc9d9498-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.240949 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.241228 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.621495 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61303610-0db1-41ff-8ccc-11e4bc9d9498","Type":"ContainerDied","Data":"229c564209b0bc42f3739a521d382277e9f5aee10385d3465cc49e26c4e1eb17"} Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.621552 4687 scope.go:117] "RemoveContainer" containerID="f51dfd22aa2dc3463128d8b790a081172dff2d5b6614d3a4a22346dc6ebf1f62" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.622144 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.660839 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.675697 4687 scope.go:117] "RemoveContainer" containerID="658dd5afd19680a513138894f7a23ba1a561b12dc0618093f6bc84b255d0c193" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.692543 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.720568 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:48 crc kubenswrapper[4687]: E0312 16:27:48.721453 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="proxy-httpd" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.721485 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="proxy-httpd" Mar 12 16:27:48 crc kubenswrapper[4687]: E0312 16:27:48.721526 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-central-agent" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.721540 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-central-agent" Mar 12 16:27:48 crc kubenswrapper[4687]: E0312 16:27:48.721592 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-notification-agent" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.721606 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-notification-agent" Mar 12 16:27:48 crc kubenswrapper[4687]: E0312 16:27:48.721659 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="sg-core" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.721673 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="sg-core" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.722060 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="sg-core" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.722105 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="proxy-httpd" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.722137 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-notification-agent" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.722157 4687 scope.go:117] "RemoveContainer" containerID="b8b678fda25ea65d2061b1da1c9c4fd3bfbb9141fc3f7b03640f1c4ac59070c4" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.722175 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" containerName="ceilometer-central-agent" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.726260 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.729032 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.729565 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.732122 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.766558 4687 scope.go:117] "RemoveContainer" containerID="bba3018aad1ce986da36be19d15f869d851b0cc08be58d25ba5217858dcc0813" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.775917 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-run-httpd\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.775967 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.776075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-config-data\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.776349 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-log-httpd\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.776435 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.776470 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-scripts\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.776498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxp7\" (UniqueName: \"kubernetes.io/projected/a75b499f-2946-4330-a364-5ef6e733a8da-kube-api-access-mpxp7\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.879061 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-log-httpd\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.879483 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.879669 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-scripts\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.879849 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxp7\" (UniqueName: \"kubernetes.io/projected/a75b499f-2946-4330-a364-5ef6e733a8da-kube-api-access-mpxp7\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.880231 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-run-httpd\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.880306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.880622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-config-data\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.880675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-run-httpd\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.881184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-log-httpd\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.884227 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.884301 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.884558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-config-data\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.889921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-scripts\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:48 crc kubenswrapper[4687]: I0312 16:27:48.897779 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxp7\" (UniqueName: \"kubernetes.io/projected/a75b499f-2946-4330-a364-5ef6e733a8da-kube-api-access-mpxp7\") pod \"ceilometer-0\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " pod="openstack/ceilometer-0" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.053753 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:27:49 crc kubenswrapper[4687]: W0312 16:27:49.087060 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-conmon-b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-conmon-b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4.scope: no such file or directory Mar 12 16:27:49 crc kubenswrapper[4687]: W0312 16:27:49.087467 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4.scope: no such file or directory Mar 12 16:27:49 crc kubenswrapper[4687]: W0312 16:27:49.094255 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-conmon-390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-conmon-390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371.scope: no such file or directory Mar 12 16:27:49 crc kubenswrapper[4687]: W0312 16:27:49.094616 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371.scope: no such file or directory Mar 12 16:27:49 crc kubenswrapper[4687]: W0312 16:27:49.097865 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61303610_0db1_41ff_8ccc_11e4bc9d9498.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61303610_0db1_41ff_8ccc_11e4bc9d9498.slice: no such file or directory Mar 12 16:27:49 crc kubenswrapper[4687]: W0312 16:27:49.102957 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-conmon-5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-conmon-5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef.scope: no such file or directory Mar 12 16:27:49 crc kubenswrapper[4687]: W0312 16:27:49.103292 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5438880_3f2b_45ec_91ec_5f43b5aa0fe6.slice/crio-5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef.scope: no such file or directory Mar 12 16:27:49 crc kubenswrapper[4687]: E0312 16:27:49.303550 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7465a_eef7_4b1d_951f_153168f06d07.slice/crio-28481b5fccb007cdb8cd4cc4f57d6fec35bca96c36d14ec1ef7161280252aaba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9201a49a_2a15_400d_ab4f_20f55725f719.slice/crio-888b4878d1ff7a52007a14287ee41512344552f580d8721ee6ed8c698c4fcf50\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7465a_eef7_4b1d_951f_153168f06d07.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9201a49a_2a15_400d_ab4f_20f55725f719.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice/crio-12f494a8d78f9c2e35325fa70cbc41bb34418c1b3d97e50c980563183bd37585\": RecentStats: unable to find data in memory cache]" Mar 12 16:27:49 crc kubenswrapper[4687]: E0312 16:27:49.304352 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice/crio-12f494a8d78f9c2e35325fa70cbc41bb34418c1b3d97e50c980563183bd37585\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9201a49a_2a15_400d_ab4f_20f55725f719.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7465a_eef7_4b1d_951f_153168f06d07.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7465a_eef7_4b1d_951f_153168f06d07.slice/crio-28481b5fccb007cdb8cd4cc4f57d6fec35bca96c36d14ec1ef7161280252aaba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9201a49a_2a15_400d_ab4f_20f55725f719.slice/crio-888b4878d1ff7a52007a14287ee41512344552f580d8721ee6ed8c698c4fcf50\": RecentStats: unable to find data in memory cache]" Mar 12 16:27:49 crc kubenswrapper[4687]: E0312 16:27:49.308477 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9201a49a_2a15_400d_ab4f_20f55725f719.slice/crio-888b4878d1ff7a52007a14287ee41512344552f580d8721ee6ed8c698c4fcf50\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7465a_eef7_4b1d_951f_153168f06d07.slice/crio-28481b5fccb007cdb8cd4cc4f57d6fec35bca96c36d14ec1ef7161280252aaba\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice/crio-12f494a8d78f9c2e35325fa70cbc41bb34418c1b3d97e50c980563183bd37585\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902afeb_cd68_4dc1_ab83_a5cc86808687.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7465a_eef7_4b1d_951f_153168f06d07.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9201a49a_2a15_400d_ab4f_20f55725f719.slice\": RecentStats: unable to find data in memory cache]" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.309451 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ftwv9" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="registry-server" probeResult="failure" output=< Mar 12 16:27:49 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:27:49 crc kubenswrapper[4687]: > Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.642283 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8d7b398-f0d2-45be-894b-f982f5216512" containerID="736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab" exitCode=137 Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.642317 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a8d7b398-f0d2-45be-894b-f982f5216512","Type":"ContainerDied","Data":"736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab"} Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.645340 4687 generic.go:334] "Generic (PLEG): container finished" podID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerID="e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f" exitCode=137 Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.645395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c9c5fb-55e9-4941-92ee-074bb7937135","Type":"ContainerDied","Data":"e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f"} Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.645430 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51c9c5fb-55e9-4941-92ee-074bb7937135","Type":"ContainerDied","Data":"d7ba665bd73a9f382cae2cf00a60e97cf515d10bde1d252b5a9daa2e5fe774be"} Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.645443 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7ba665bd73a9f382cae2cf00a60e97cf515d10bde1d252b5a9daa2e5fe774be" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.714193 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.762967 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61303610-0db1-41ff-8ccc-11e4bc9d9498" path="/var/lib/kubelet/pods/61303610-0db1-41ff-8ccc-11e4bc9d9498/volumes" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.850217 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d55jt\" (UniqueName: \"kubernetes.io/projected/51c9c5fb-55e9-4941-92ee-074bb7937135-kube-api-access-d55jt\") pod \"51c9c5fb-55e9-4941-92ee-074bb7937135\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.850336 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-config-data\") pod \"51c9c5fb-55e9-4941-92ee-074bb7937135\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.850461 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-combined-ca-bundle\") pod \"51c9c5fb-55e9-4941-92ee-074bb7937135\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.850533 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9c5fb-55e9-4941-92ee-074bb7937135-logs\") pod \"51c9c5fb-55e9-4941-92ee-074bb7937135\" (UID: \"51c9c5fb-55e9-4941-92ee-074bb7937135\") " Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.860795 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c9c5fb-55e9-4941-92ee-074bb7937135-logs" (OuterVolumeSpecName: "logs") pod "51c9c5fb-55e9-4941-92ee-074bb7937135" (UID: "51c9c5fb-55e9-4941-92ee-074bb7937135"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.861054 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c9c5fb-55e9-4941-92ee-074bb7937135-kube-api-access-d55jt" (OuterVolumeSpecName: "kube-api-access-d55jt") pod "51c9c5fb-55e9-4941-92ee-074bb7937135" (UID: "51c9c5fb-55e9-4941-92ee-074bb7937135"). InnerVolumeSpecName "kube-api-access-d55jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.904485 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-config-data" (OuterVolumeSpecName: "config-data") pod "51c9c5fb-55e9-4941-92ee-074bb7937135" (UID: "51c9c5fb-55e9-4941-92ee-074bb7937135"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.917910 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.926842 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.929163 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51c9c5fb-55e9-4941-92ee-074bb7937135" (UID: "51c9c5fb-55e9-4941-92ee-074bb7937135"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.955329 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.955645 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c9c5fb-55e9-4941-92ee-074bb7937135-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.955657 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c9c5fb-55e9-4941-92ee-074bb7937135-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:49 crc kubenswrapper[4687]: I0312 16:27:49.955667 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d55jt\" (UniqueName: \"kubernetes.io/projected/51c9c5fb-55e9-4941-92ee-074bb7937135-kube-api-access-d55jt\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.057252 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/a8d7b398-f0d2-45be-894b-f982f5216512-kube-api-access-qdflb\") pod \"a8d7b398-f0d2-45be-894b-f982f5216512\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.057296 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-config-data\") pod \"a8d7b398-f0d2-45be-894b-f982f5216512\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 
16:27:50.057327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-combined-ca-bundle\") pod \"a8d7b398-f0d2-45be-894b-f982f5216512\" (UID: \"a8d7b398-f0d2-45be-894b-f982f5216512\") " Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.060563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d7b398-f0d2-45be-894b-f982f5216512-kube-api-access-qdflb" (OuterVolumeSpecName: "kube-api-access-qdflb") pod "a8d7b398-f0d2-45be-894b-f982f5216512" (UID: "a8d7b398-f0d2-45be-894b-f982f5216512"). InnerVolumeSpecName "kube-api-access-qdflb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.102877 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-config-data" (OuterVolumeSpecName: "config-data") pod "a8d7b398-f0d2-45be-894b-f982f5216512" (UID: "a8d7b398-f0d2-45be-894b-f982f5216512"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.113185 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8d7b398-f0d2-45be-894b-f982f5216512" (UID: "a8d7b398-f0d2-45be-894b-f982f5216512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.159821 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/a8d7b398-f0d2-45be-894b-f982f5216512-kube-api-access-qdflb\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.159854 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.159864 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d7b398-f0d2-45be-894b-f982f5216512-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.669780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerStarted","Data":"534b802982a7aa946d9ea50d3e61ce1c16ca86c14d1354d2144a67f7bb700f83"} Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.672421 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a8d7b398-f0d2-45be-894b-f982f5216512","Type":"ContainerDied","Data":"3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288"} Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.672451 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.672486 4687 scope.go:117] "RemoveContainer" containerID="736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.674546 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.768629 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.804910 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.824432 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.833278 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.846371 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: E0312 16:27:50.847099 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-log" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.847120 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-log" Mar 12 16:27:50 crc kubenswrapper[4687]: E0312 16:27:50.847143 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d7b398-f0d2-45be-894b-f982f5216512" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.847150 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d7b398-f0d2-45be-894b-f982f5216512" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 16:27:50 crc kubenswrapper[4687]: E0312 16:27:50.847168 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-metadata" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.847173 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-metadata" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.847452 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d7b398-f0d2-45be-894b-f982f5216512" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.847476 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-metadata" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.847498 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" containerName="nova-metadata-log" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.848411 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.850309 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.852875 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.852897 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.858647 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.872032 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.875301 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.877711 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.881842 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.881973 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.988623 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-config-data\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.989091 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.989247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjv85\" (UniqueName: \"kubernetes.io/projected/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-kube-api-access-zjv85\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.989566 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.989773 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.989967 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twdjl\" (UniqueName: \"kubernetes.io/projected/8409fe01-316a-4d7b-88a5-d8a454d3962f-kube-api-access-twdjl\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.990166 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.990336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.990543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8409fe01-316a-4d7b-88a5-d8a454d3962f-logs\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:50 crc kubenswrapper[4687]: I0312 16:27:50.990735 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.093528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-config-data\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.093696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.093723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjv85\" (UniqueName: \"kubernetes.io/projected/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-kube-api-access-zjv85\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.093781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 
16:27:51.093823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.093865 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twdjl\" (UniqueName: \"kubernetes.io/projected/8409fe01-316a-4d7b-88a5-d8a454d3962f-kube-api-access-twdjl\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.093904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.095187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.095218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8409fe01-316a-4d7b-88a5-d8a454d3962f-logs\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.095268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.096473 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8409fe01-316a-4d7b-88a5-d8a454d3962f-logs\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.100007 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.100778 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.101204 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.104903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.110326 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.113674 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.113703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twdjl\" (UniqueName: \"kubernetes.io/projected/8409fe01-316a-4d7b-88a5-d8a454d3962f-kube-api-access-twdjl\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.118572 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-config-data\") pod \"nova-metadata-0\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.123856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjv85\" (UniqueName: \"kubernetes.io/projected/45b04c92-5e49-4aaa-937d-1bbf1339cbfb-kube-api-access-zjv85\") pod \"nova-cell1-novncproxy-0\" (UID: \"45b04c92-5e49-4aaa-937d-1bbf1339cbfb\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.180481 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.201526 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.665685 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.686508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerStarted","Data":"ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b"} Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.686569 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerStarted","Data":"a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877"} Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.689384 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"45b04c92-5e49-4aaa-937d-1bbf1339cbfb","Type":"ContainerStarted","Data":"8577281e38766a0066bffef62405601de27ac6f98b9d4b7879f7cf6be2b75790"} Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.813235 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c9c5fb-55e9-4941-92ee-074bb7937135" path="/var/lib/kubelet/pods/51c9c5fb-55e9-4941-92ee-074bb7937135/volumes" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.815299 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d7b398-f0d2-45be-894b-f982f5216512" path="/var/lib/kubelet/pods/a8d7b398-f0d2-45be-894b-f982f5216512/volumes" Mar 12 16:27:51 crc kubenswrapper[4687]: I0312 16:27:51.820971 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.705728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"45b04c92-5e49-4aaa-937d-1bbf1339cbfb","Type":"ContainerStarted","Data":"18327f0288464f568b475d5d7179192b635525ae834892a41d4842f11c165f83"} Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.708027 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerStarted","Data":"1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec"} Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.710898 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8409fe01-316a-4d7b-88a5-d8a454d3962f","Type":"ContainerStarted","Data":"b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2"} Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.710925 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8409fe01-316a-4d7b-88a5-d8a454d3962f","Type":"ContainerStarted","Data":"7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222"} Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.710935 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8409fe01-316a-4d7b-88a5-d8a454d3962f","Type":"ContainerStarted","Data":"a679d91c5ade45b01623613d90973dfab9bf3ccd4ddf80198866553da72c22a1"} Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.726384 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.7263477739999997 podStartE2EDuration="2.726347774s" 
podCreationTimestamp="2026-03-12 16:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:52.72472611 +0000 UTC m=+1521.688688454" watchObservedRunningTime="2026-03-12 16:27:52.726347774 +0000 UTC m=+1521.690310118" Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.750840 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.75081907 podStartE2EDuration="2.75081907s" podCreationTimestamp="2026-03-12 16:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:52.740883734 +0000 UTC m=+1521.704846078" watchObservedRunningTime="2026-03-12 16:27:52.75081907 +0000 UTC m=+1521.714781414" Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.808777 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.809760 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.813608 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 16:27:52 crc kubenswrapper[4687]: I0312 16:27:52.816197 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 16:27:53 crc kubenswrapper[4687]: I0312 16:27:53.723702 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 16:27:53 crc kubenswrapper[4687]: I0312 16:27:53.729707 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 16:27:53 crc kubenswrapper[4687]: I0312 16:27:53.981608 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9r576"] Mar 12 16:27:53 crc kubenswrapper[4687]: I0312 16:27:53.984048 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:53 crc kubenswrapper[4687]: I0312 16:27:53.997488 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9r576"] Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.086692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.088194 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.088324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.088579 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.088713 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz85w\" (UniqueName: \"kubernetes.io/projected/80c26f7b-2406-431b-990b-99c716d8860a-kube-api-access-vz85w\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.088852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.190218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.190485 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz85w\" (UniqueName: \"kubernetes.io/projected/80c26f7b-2406-431b-990b-99c716d8860a-kube-api-access-vz85w\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.190600 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.190713 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.190857 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.190962 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.191086 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.191483 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.191714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-config\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.191714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.191942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.214857 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz85w\" (UniqueName: 
\"kubernetes.io/projected/80c26f7b-2406-431b-990b-99c716d8860a-kube-api-access-vz85w\") pod \"dnsmasq-dns-6b7bbf7cf9-9r576\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.305776 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.758049 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerStarted","Data":"7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b"} Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.822732 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.472921084 podStartE2EDuration="6.822712306s" podCreationTimestamp="2026-03-12 16:27:48 +0000 UTC" firstStartedPulling="2026-03-12 16:27:49.925454146 +0000 UTC m=+1518.889416490" lastFinishedPulling="2026-03-12 16:27:54.275245368 +0000 UTC m=+1523.239207712" observedRunningTime="2026-03-12 16:27:54.799650547 +0000 UTC m=+1523.763612881" watchObservedRunningTime="2026-03-12 16:27:54.822712306 +0000 UTC m=+1523.786674650" Mar 12 16:27:54 crc kubenswrapper[4687]: I0312 16:27:54.928894 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9r576"] Mar 12 16:27:55 crc kubenswrapper[4687]: I0312 16:27:55.788634 4687 generic.go:334] "Generic (PLEG): container finished" podID="80c26f7b-2406-431b-990b-99c716d8860a" containerID="aeb9e3c1423e7caee15e6aa96229c940ed17e3395e2315c6da8ee6458acbe250" exitCode=0 Mar 12 16:27:55 crc kubenswrapper[4687]: I0312 16:27:55.789574 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" event={"ID":"80c26f7b-2406-431b-990b-99c716d8860a","Type":"ContainerDied","Data":"aeb9e3c1423e7caee15e6aa96229c940ed17e3395e2315c6da8ee6458acbe250"} Mar 12 16:27:55 crc kubenswrapper[4687]: I0312 16:27:55.790854 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:27:55 crc kubenswrapper[4687]: I0312 16:27:55.790871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" event={"ID":"80c26f7b-2406-431b-990b-99c716d8860a","Type":"ContainerStarted","Data":"b9e5b60b5fcdfda02afdd1c06cfece11c3836d8d8a940d9c285d9981b7dff953"} Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.180766 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.202556 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.202609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.477290 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.644337 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.802400 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" 
event={"ID":"80c26f7b-2406-431b-990b-99c716d8860a","Type":"ContainerStarted","Data":"11c631cf9e6ef62846cc8e0be2de6e26a94e5dd39e461b06981db7ddb2f5158d"} Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.802700 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-log" containerID="cri-o://92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b" gracePeriod=30 Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.802868 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-api" containerID="cri-o://5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d" gracePeriod=30 Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.803093 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:27:56 crc kubenswrapper[4687]: I0312 16:27:56.836142 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" podStartSLOduration=3.836118142 podStartE2EDuration="3.836118142s" podCreationTimestamp="2026-03-12 16:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:27:56.827067608 +0000 UTC m=+1525.791029952" watchObservedRunningTime="2026-03-12 16:27:56.836118142 +0000 UTC m=+1525.800080486" Mar 12 16:27:57 crc kubenswrapper[4687]: I0312 16:27:57.816620 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerID="92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b" exitCode=143 Mar 12 16:27:57 crc kubenswrapper[4687]: I0312 16:27:57.816681 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8bb5c86-04c9-48c0-87f8-0230288ad725","Type":"ContainerDied","Data":"92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b"} Mar 12 16:27:57 crc kubenswrapper[4687]: I0312 16:27:57.817413 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-central-agent" containerID="cri-o://a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877" gracePeriod=30 Mar 12 16:27:57 crc kubenswrapper[4687]: I0312 16:27:57.817467 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="proxy-httpd" containerID="cri-o://7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b" gracePeriod=30 Mar 12 16:27:57 crc kubenswrapper[4687]: I0312 16:27:57.817455 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="sg-core" containerID="cri-o://1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec" gracePeriod=30 Mar 12 16:27:57 crc kubenswrapper[4687]: I0312 16:27:57.817538 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-notification-agent" containerID="cri-o://ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b" gracePeriod=30 Mar 12 16:27:58 crc 
kubenswrapper[4687]: I0312 16:27:58.838156 4687 generic.go:334] "Generic (PLEG): container finished" podID="a75b499f-2946-4330-a364-5ef6e733a8da" containerID="7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b" exitCode=0 Mar 12 16:27:58 crc kubenswrapper[4687]: I0312 16:27:58.838519 4687 generic.go:334] "Generic (PLEG): container finished" podID="a75b499f-2946-4330-a364-5ef6e733a8da" containerID="1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec" exitCode=2 Mar 12 16:27:58 crc kubenswrapper[4687]: I0312 16:27:58.838552 4687 generic.go:334] "Generic (PLEG): container finished" podID="a75b499f-2946-4330-a364-5ef6e733a8da" containerID="ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b" exitCode=0 Mar 12 16:27:58 crc kubenswrapper[4687]: I0312 16:27:58.838208 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerDied","Data":"7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b"} Mar 12 16:27:58 crc kubenswrapper[4687]: I0312 16:27:58.838597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerDied","Data":"1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec"} Mar 12 16:27:58 crc kubenswrapper[4687]: I0312 16:27:58.838616 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerDied","Data":"ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b"} Mar 12 16:27:59 crc kubenswrapper[4687]: I0312 16:27:59.286507 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ftwv9" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="registry-server" probeResult="failure" output=< Mar 12 16:27:59 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:27:59 crc kubenswrapper[4687]: > Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.141143 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555548-cjcvk"] Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.144061 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.147415 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.147484 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.147627 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.153161 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555548-cjcvk"] Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.259247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp5hd\" (UniqueName: \"kubernetes.io/projected/6932a099-9587-4b14-925a-433e077f505b-kube-api-access-mp5hd\") pod \"auto-csr-approver-29555548-cjcvk\" (UID: \"6932a099-9587-4b14-925a-433e077f505b\") " pod="openshift-infra/auto-csr-approver-29555548-cjcvk" Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.362237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp5hd\" (UniqueName: \"kubernetes.io/projected/6932a099-9587-4b14-925a-433e077f505b-kube-api-access-mp5hd\") pod \"auto-csr-approver-29555548-cjcvk\" (UID: \"6932a099-9587-4b14-925a-433e077f505b\") " pod="openshift-infra/auto-csr-approver-29555548-cjcvk" Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.793129 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp5hd\" (UniqueName: \"kubernetes.io/projected/6932a099-9587-4b14-925a-433e077f505b-kube-api-access-mp5hd\") pod \"auto-csr-approver-29555548-cjcvk\" (UID: \"6932a099-9587-4b14-925a-433e077f505b\") " pod="openshift-infra/auto-csr-approver-29555548-cjcvk" Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.870200 4687 generic.go:334] "Generic (PLEG): container finished" podID="a75b499f-2946-4330-a364-5ef6e733a8da" containerID="a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877" exitCode=0 Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.870273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerDied","Data":"a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877"} Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.878919 4687 generic.go:334] "Generic (PLEG): container finished" podID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerID="5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d" exitCode=0 Mar 12 16:28:00 crc kubenswrapper[4687]: I0312 16:28:00.879100 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8bb5c86-04c9-48c0-87f8-0230288ad725","Type":"ContainerDied","Data":"5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d"} Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.044691 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.070112 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.181208 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.188991 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mz29\" (UniqueName: \"kubernetes.io/projected/a8bb5c86-04c9-48c0-87f8-0230288ad725-kube-api-access-4mz29\") pod \"a8bb5c86-04c9-48c0-87f8-0230288ad725\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.189114 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb5c86-04c9-48c0-87f8-0230288ad725-logs\") pod \"a8bb5c86-04c9-48c0-87f8-0230288ad725\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.189162 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-config-data\") pod \"a8bb5c86-04c9-48c0-87f8-0230288ad725\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.189194 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-combined-ca-bundle\") pod \"a8bb5c86-04c9-48c0-87f8-0230288ad725\" (UID: \"a8bb5c86-04c9-48c0-87f8-0230288ad725\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.191831 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bb5c86-04c9-48c0-87f8-0230288ad725-logs" (OuterVolumeSpecName: "logs") pod "a8bb5c86-04c9-48c0-87f8-0230288ad725" (UID: "a8bb5c86-04c9-48c0-87f8-0230288ad725"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.197310 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bb5c86-04c9-48c0-87f8-0230288ad725-kube-api-access-4mz29" (OuterVolumeSpecName: "kube-api-access-4mz29") pod "a8bb5c86-04c9-48c0-87f8-0230288ad725" (UID: "a8bb5c86-04c9-48c0-87f8-0230288ad725"). InnerVolumeSpecName "kube-api-access-4mz29". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.203953 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.205522 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.238879 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.239911 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-config-data" (OuterVolumeSpecName: "config-data") pod "a8bb5c86-04c9-48c0-87f8-0230288ad725" (UID: "a8bb5c86-04c9-48c0-87f8-0230288ad725"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.248849 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8bb5c86-04c9-48c0-87f8-0230288ad725" (UID: "a8bb5c86-04c9-48c0-87f8-0230288ad725"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.291938 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mz29\" (UniqueName: \"kubernetes.io/projected/a8bb5c86-04c9-48c0-87f8-0230288ad725-kube-api-access-4mz29\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.293149 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb5c86-04c9-48c0-87f8-0230288ad725-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.293266 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.293332 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8bb5c86-04c9-48c0-87f8-0230288ad725-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.388078 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497269 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-run-httpd\") pod \"a75b499f-2946-4330-a364-5ef6e733a8da\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497420 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpxp7\" (UniqueName: \"kubernetes.io/projected/a75b499f-2946-4330-a364-5ef6e733a8da-kube-api-access-mpxp7\") pod \"a75b499f-2946-4330-a364-5ef6e733a8da\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-scripts\") pod \"a75b499f-2946-4330-a364-5ef6e733a8da\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497531 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-sg-core-conf-yaml\") pod \"a75b499f-2946-4330-a364-5ef6e733a8da\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497609 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-log-httpd\") pod \"a75b499f-2946-4330-a364-5ef6e733a8da\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497679 4687 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a75b499f-2946-4330-a364-5ef6e733a8da" (UID: "a75b499f-2946-4330-a364-5ef6e733a8da"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497700 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-config-data\") pod \"a75b499f-2946-4330-a364-5ef6e733a8da\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.497769 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-combined-ca-bundle\") pod \"a75b499f-2946-4330-a364-5ef6e733a8da\" (UID: \"a75b499f-2946-4330-a364-5ef6e733a8da\") " Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.498341 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.498648 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a75b499f-2946-4330-a364-5ef6e733a8da" (UID: "a75b499f-2946-4330-a364-5ef6e733a8da"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.503334 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75b499f-2946-4330-a364-5ef6e733a8da-kube-api-access-mpxp7" (OuterVolumeSpecName: "kube-api-access-mpxp7") pod "a75b499f-2946-4330-a364-5ef6e733a8da" (UID: "a75b499f-2946-4330-a364-5ef6e733a8da"). InnerVolumeSpecName "kube-api-access-mpxp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.521235 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-scripts" (OuterVolumeSpecName: "scripts") pod "a75b499f-2946-4330-a364-5ef6e733a8da" (UID: "a75b499f-2946-4330-a364-5ef6e733a8da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.548675 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a75b499f-2946-4330-a364-5ef6e733a8da" (UID: "a75b499f-2946-4330-a364-5ef6e733a8da"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.605139 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpxp7\" (UniqueName: \"kubernetes.io/projected/a75b499f-2946-4330-a364-5ef6e733a8da-kube-api-access-mpxp7\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.605182 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.605194 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.605205 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a75b499f-2946-4330-a364-5ef6e733a8da-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.622529 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a75b499f-2946-4330-a364-5ef6e733a8da" (UID: "a75b499f-2946-4330-a364-5ef6e733a8da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.643846 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-config-data" (OuterVolumeSpecName: "config-data") pod "a75b499f-2946-4330-a364-5ef6e733a8da" (UID: "a75b499f-2946-4330-a364-5ef6e733a8da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.647778 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555548-cjcvk"] Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.709767 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.709799 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75b499f-2946-4330-a364-5ef6e733a8da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.893066 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" event={"ID":"6932a099-9587-4b14-925a-433e077f505b","Type":"ContainerStarted","Data":"7a41399d84ae6d63071682db37430f13603e0c3d8b4ea24ad0a6fbc2debb0f7b"} Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.897274 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.897469 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a75b499f-2946-4330-a364-5ef6e733a8da","Type":"ContainerDied","Data":"534b802982a7aa946d9ea50d3e61ce1c16ca86c14d1354d2144a67f7bb700f83"} Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.897669 4687 scope.go:117] "RemoveContainer" containerID="7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.909754 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.909811 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8bb5c86-04c9-48c0-87f8-0230288ad725","Type":"ContainerDied","Data":"cba0b8e9d34688a917b43da8abde36fcfc574d102ed304ea9034848b1b42efc5"} Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.928638 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.933662 4687 scope.go:117] "RemoveContainer" containerID="1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.934551 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.960706 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.980598 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.983295 4687 scope.go:117] "RemoveContainer" containerID="ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b" Mar 12 16:28:01 crc kubenswrapper[4687]: I0312 16:28:01.996455 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.007410 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:02 crc kubenswrapper[4687]: E0312 16:28:02.007947 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="proxy-httpd" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.007960 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="proxy-httpd" Mar 12 16:28:02 crc kubenswrapper[4687]: E0312 16:28:02.007981 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-api" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.007988 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-api" Mar 12 16:28:02 crc kubenswrapper[4687]: E0312 16:28:02.008008 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-log" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008016 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-log" Mar 12 16:28:02 crc kubenswrapper[4687]: E0312 16:28:02.008027 4687 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-central-agent" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008035 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-central-agent" Mar 12 16:28:02 crc kubenswrapper[4687]: E0312 16:28:02.008045 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-notification-agent" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008053 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-notification-agent" Mar 12 16:28:02 crc kubenswrapper[4687]: E0312 16:28:02.008076 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="sg-core" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008082 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="sg-core" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008299 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-api" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008316 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="proxy-httpd" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008324 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-central-agent" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008382 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="sg-core" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008400 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" containerName="ceilometer-notification-agent" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.008407 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" containerName="nova-api-log" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.014826 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.019725 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.020063 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.020310 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.022326 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.026750 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.026924 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.028879 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.043565 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.084678 4687 scope.go:117] "RemoveContainer" containerID="a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.096987 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbfns\" (UniqueName: \"kubernetes.io/projected/b10456c5-1f71-40ff-bfbb-628757cefc6a-kube-api-access-fbfns\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120172 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120195 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120242 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7n4\" (UniqueName: \"kubernetes.io/projected/f5891051-42e9-4519-a759-9305c817e4b0-kube-api-access-nt7n4\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-config-data\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-scripts\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120589 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.120815 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-log-httpd\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.121032 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-run-httpd\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.122340 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10456c5-1f71-40ff-bfbb-628757cefc6a-logs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.122503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-config-data\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.135338 4687 scope.go:117] "RemoveContainer" containerID="5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.176485 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-j7cr7"] Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.178775 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.182234 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.182804 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.196152 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7cr7"] Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-log-httpd\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224089 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-run-httpd\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224117 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10456c5-1f71-40ff-bfbb-628757cefc6a-logs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-config-data\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbfns\" (UniqueName: \"kubernetes.io/projected/b10456c5-1f71-40ff-bfbb-628757cefc6a-kube-api-access-fbfns\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224239 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224283 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt7n4\" (UniqueName: \"kubernetes.io/projected/f5891051-42e9-4519-a759-9305c817e4b0-kube-api-access-nt7n4\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224349 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-config-data\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224443 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-scripts\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224468 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224495 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.224542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.227977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.228666 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.228905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-log-httpd\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.229111 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-run-httpd\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.229399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10456c5-1f71-40ff-bfbb-628757cefc6a-logs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.230811 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.235729 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.235830 4687 scope.go:117] "RemoveContainer" containerID="92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.236329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-config-data\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.236787 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.236897 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.244987 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-scripts\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.246906 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.247435 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbfns\" (UniqueName: \"kubernetes.io/projected/b10456c5-1f71-40ff-bfbb-628757cefc6a-kube-api-access-fbfns\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.250680 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-config-data\") pod \"nova-api-0\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.251063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt7n4\" (UniqueName: \"kubernetes.io/projected/f5891051-42e9-4519-a759-9305c817e4b0-kube-api-access-nt7n4\") pod \"ceilometer-0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " pod="openstack/ceilometer-0" Mar 12 16:28:02 crc 
kubenswrapper[4687]: I0312 16:28:02.326942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-scripts\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.327017 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.327048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-config-data\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.327087 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkcsn\" (UniqueName: \"kubernetes.io/projected/af6ae379-ebea-4caf-819d-2c566201e67a-kube-api-access-pkcsn\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.367511 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.386109 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.429949 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.430076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-config-data\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.430162 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkcsn\" (UniqueName: \"kubernetes.io/projected/af6ae379-ebea-4caf-819d-2c566201e67a-kube-api-access-pkcsn\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.431029 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-scripts\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.434937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-scripts\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.435401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.436922 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-config-data\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.449649 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkcsn\" (UniqueName: \"kubernetes.io/projected/af6ae379-ebea-4caf-819d-2c566201e67a-kube-api-access-pkcsn\") pod \"nova-cell1-cell-mapping-j7cr7\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.524987 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:02 crc kubenswrapper[4687]: I0312 16:28:02.931040 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.110772 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.288247 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7cr7"] Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.748276 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a75b499f-2946-4330-a364-5ef6e733a8da" path="/var/lib/kubelet/pods/a75b499f-2946-4330-a364-5ef6e733a8da/volumes" Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.749294 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bb5c86-04c9-48c0-87f8-0230288ad725" path="/var/lib/kubelet/pods/a8bb5c86-04c9-48c0-87f8-0230288ad725/volumes" Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.954948 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b10456c5-1f71-40ff-bfbb-628757cefc6a","Type":"ContainerStarted","Data":"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.955000 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b10456c5-1f71-40ff-bfbb-628757cefc6a","Type":"ContainerStarted","Data":"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.955013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b10456c5-1f71-40ff-bfbb-628757cefc6a","Type":"ContainerStarted","Data":"00bca8e3c3692df0f5c8276feaf78c8c967755d651c2ebe2c71420cba60b471d"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.958232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerStarted","Data":"ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.958287 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerStarted","Data":"ecf6b6ec10f10360e832959d3fd440000180ab302cc45432bb15d061584f22a7"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.960652 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7cr7" event={"ID":"af6ae379-ebea-4caf-819d-2c566201e67a","Type":"ContainerStarted","Data":"149578c8036321e87943445e8117d2c7e4e6d2daa770b9a61319b32f4a0324cf"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.960837 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7cr7" event={"ID":"af6ae379-ebea-4caf-819d-2c566201e67a","Type":"ContainerStarted","Data":"162239a2e84cfa744854c6a5607b607a3fd8fb371d13f49a2676fe2b9c5c197c"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.961999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" event={"ID":"6932a099-9587-4b14-925a-433e077f505b","Type":"ContainerStarted","Data":"a19c1c41061ef6a2567c3aa660b9806c7cb93025766138dc288b3b3ed10a2f9f"} Mar 12 16:28:03 crc kubenswrapper[4687]: I0312 16:28:03.981972 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.981944442 podStartE2EDuration="2.981944442s" podCreationTimestamp="2026-03-12 16:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:28:03.978399666 +0000 UTC m=+1532.942362010" watchObservedRunningTime="2026-03-12 16:28:03.981944442 +0000 UTC m=+1532.945906786" Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.016089 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" podStartSLOduration=2.947585331 podStartE2EDuration="4.016071008s" podCreationTimestamp="2026-03-12 16:28:00 +0000 UTC" firstStartedPulling="2026-03-12 16:28:01.649155561 +0000 UTC m=+1530.613117915" lastFinishedPulling="2026-03-12 16:28:02.717641248 +0000 UTC m=+1531.681603592" observedRunningTime="2026-03-12 16:28:04.003473729 +0000 UTC m=+1532.967436093" watchObservedRunningTime="2026-03-12 16:28:04.016071008 +0000 UTC m=+1532.980033352" Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.038974 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-j7cr7" podStartSLOduration=2.038819788 podStartE2EDuration="2.038819788s" podCreationTimestamp="2026-03-12 16:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:28:04.019351155 +0000 UTC m=+1532.983313509" watchObservedRunningTime="2026-03-12 16:28:04.038819788 +0000 UTC m=+1533.002782132" Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.307992 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.381283 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-p5bmc"] Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.381935 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" podUID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerName="dnsmasq-dns" containerID="cri-o://8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8" gracePeriod=10 Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.921591 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.987482 4687 generic.go:334] "Generic (PLEG): container finished" podID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerID="8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8" exitCode=0 Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.987666 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.987820 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" event={"ID":"4d9aadaa-3727-48e2-b482-86ccb2f809cf","Type":"ContainerDied","Data":"8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8"} Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.987888 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-p5bmc" event={"ID":"4d9aadaa-3727-48e2-b482-86ccb2f809cf","Type":"ContainerDied","Data":"32a24639cd98ff058a4b73b0faba1ca3dfbbd36de06f6dc42e3f3dbc307976a6"} Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.987925 4687 scope.go:117] "RemoveContainer" containerID="8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8" Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.993456 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerStarted","Data":"b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a"} Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.997454 4687 generic.go:334] "Generic (PLEG): container finished" podID="6932a099-9587-4b14-925a-433e077f505b" containerID="a19c1c41061ef6a2567c3aa660b9806c7cb93025766138dc288b3b3ed10a2f9f" exitCode=0 Mar 12 16:28:04 crc kubenswrapper[4687]: I0312 16:28:04.997506 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" event={"ID":"6932a099-9587-4b14-925a-433e077f505b","Type":"ContainerDied","Data":"a19c1c41061ef6a2567c3aa660b9806c7cb93025766138dc288b3b3ed10a2f9f"} Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.019920 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-nb\") pod \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.020051 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-swift-storage-0\") pod \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.020087 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-svc\") pod \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.020160 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-config\") pod \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.020216 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-sb\") pod \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 
16:28:05.020345 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm4j9\" (UniqueName: \"kubernetes.io/projected/4d9aadaa-3727-48e2-b482-86ccb2f809cf-kube-api-access-tm4j9\") pod \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\" (UID: \"4d9aadaa-3727-48e2-b482-86ccb2f809cf\") " Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.033109 4687 scope.go:117] "RemoveContainer" containerID="44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.039471 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9aadaa-3727-48e2-b482-86ccb2f809cf-kube-api-access-tm4j9" (OuterVolumeSpecName: "kube-api-access-tm4j9") pod "4d9aadaa-3727-48e2-b482-86ccb2f809cf" (UID: "4d9aadaa-3727-48e2-b482-86ccb2f809cf"). InnerVolumeSpecName "kube-api-access-tm4j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.067741 4687 scope.go:117] "RemoveContainer" containerID="8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8" Mar 12 16:28:05 crc kubenswrapper[4687]: E0312 16:28:05.070502 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8\": container with ID starting with 8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8 not found: ID does not exist" containerID="8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.070561 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8"} err="failed to get container status \"8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8\": rpc error: code = NotFound desc = could not find container \"8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8\": container with ID starting with 8a9c960d6bfa2b748fac32027d4b3cfce253b0a8cd216cfccc58552297735da8 not found: ID does not exist" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.070593 4687 scope.go:117] "RemoveContainer" containerID="44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba" Mar 12 16:28:05 crc kubenswrapper[4687]: E0312 16:28:05.070978 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba\": container with ID starting with 44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba not found: ID does not exist" containerID="44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.071019 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba"} err="failed to get container status \"44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba\": rpc error: code = NotFound desc = could not find container \"44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba\": container with ID starting with 44ab1b953ab50ca2f52cbfd3738586616d575cf3641a82fa94f6c68e1e65b2ba not found: ID does not exist" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.084976 4687 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d9aadaa-3727-48e2-b482-86ccb2f809cf" (UID: "4d9aadaa-3727-48e2-b482-86ccb2f809cf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.089326 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-config" (OuterVolumeSpecName: "config") pod "4d9aadaa-3727-48e2-b482-86ccb2f809cf" (UID: "4d9aadaa-3727-48e2-b482-86ccb2f809cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.094796 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d9aadaa-3727-48e2-b482-86ccb2f809cf" (UID: "4d9aadaa-3727-48e2-b482-86ccb2f809cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.100090 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d9aadaa-3727-48e2-b482-86ccb2f809cf" (UID: "4d9aadaa-3727-48e2-b482-86ccb2f809cf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.123043 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.123072 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm4j9\" (UniqueName: \"kubernetes.io/projected/4d9aadaa-3727-48e2-b482-86ccb2f809cf-kube-api-access-tm4j9\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.123084 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.123095 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.123107 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.147267 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d9aadaa-3727-48e2-b482-86ccb2f809cf" (UID: "4d9aadaa-3727-48e2-b482-86ccb2f809cf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.225920 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d9aadaa-3727-48e2-b482-86ccb2f809cf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.432098 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-p5bmc"] Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.443971 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-p5bmc"] Mar 12 16:28:05 crc kubenswrapper[4687]: I0312 16:28:05.755637 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" path="/var/lib/kubelet/pods/4d9aadaa-3727-48e2-b482-86ccb2f809cf/volumes" Mar 12 16:28:06 crc kubenswrapper[4687]: I0312 16:28:06.036349 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerStarted","Data":"c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe"} Mar 12 16:28:06 crc kubenswrapper[4687]: I0312 16:28:06.429345 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" Mar 12 16:28:06 crc kubenswrapper[4687]: I0312 16:28:06.557315 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp5hd\" (UniqueName: \"kubernetes.io/projected/6932a099-9587-4b14-925a-433e077f505b-kube-api-access-mp5hd\") pod \"6932a099-9587-4b14-925a-433e077f505b\" (UID: \"6932a099-9587-4b14-925a-433e077f505b\") " Mar 12 16:28:06 crc kubenswrapper[4687]: I0312 16:28:06.562490 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6932a099-9587-4b14-925a-433e077f505b-kube-api-access-mp5hd" (OuterVolumeSpecName: "kube-api-access-mp5hd") pod "6932a099-9587-4b14-925a-433e077f505b" (UID: "6932a099-9587-4b14-925a-433e077f505b"). InnerVolumeSpecName "kube-api-access-mp5hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:06 crc kubenswrapper[4687]: I0312 16:28:06.661210 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp5hd\" (UniqueName: \"kubernetes.io/projected/6932a099-9587-4b14-925a-433e077f505b-kube-api-access-mp5hd\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:07 crc kubenswrapper[4687]: I0312 16:28:07.042680 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="423af688-f49e-42a8-b2d2-623cd01ac948" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.240:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 16:28:07 crc kubenswrapper[4687]: I0312 16:28:07.049930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" event={"ID":"6932a099-9587-4b14-925a-433e077f505b","Type":"ContainerDied","Data":"7a41399d84ae6d63071682db37430f13603e0c3d8b4ea24ad0a6fbc2debb0f7b"} Mar 12 16:28:07 crc kubenswrapper[4687]: I0312 16:28:07.049977 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a41399d84ae6d63071682db37430f13603e0c3d8b4ea24ad0a6fbc2debb0f7b" Mar 12 16:28:07 crc kubenswrapper[4687]: I0312 16:28:07.050033 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555548-cjcvk" Mar 12 16:28:07 crc kubenswrapper[4687]: I0312 16:28:07.111748 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555542-9mtqp"] Mar 12 16:28:07 crc kubenswrapper[4687]: I0312 16:28:07.122953 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555542-9mtqp"] Mar 12 16:28:07 crc kubenswrapper[4687]: I0312 16:28:07.753130 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c786e4-0569-47e5-b4d6-db2631c509a6" path="/var/lib/kubelet/pods/25c786e4-0569-47e5-b4d6-db2631c509a6/volumes" Mar 12 16:28:08 crc kubenswrapper[4687]: I0312 16:28:08.062881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerStarted","Data":"5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07"} Mar 12 16:28:08 crc kubenswrapper[4687]: I0312 16:28:08.063039 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:28:08 crc kubenswrapper[4687]: I0312 16:28:08.090526 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.64445276 podStartE2EDuration="7.090506937s" podCreationTimestamp="2026-03-12 16:28:01 +0000 UTC" firstStartedPulling="2026-03-12 16:28:02.925088297 +0000 UTC m=+1531.889050631" lastFinishedPulling="2026-03-12 16:28:07.371142464 +0000 UTC m=+1536.335104808" observedRunningTime="2026-03-12 16:28:08.083745216 +0000 UTC m=+1537.047707570" watchObservedRunningTime="2026-03-12 16:28:08.090506937 +0000 UTC m=+1537.054469281" Mar 12 16:28:08 crc kubenswrapper[4687]: E0312 16:28:08.166521 4687 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f63fdebbba1444692800058d18090c2967016fa85f13f801912a9c407703a738/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f63fdebbba1444692800058d18090c2967016fa85f13f801912a9c407703a738/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_423af688-f49e-42a8-b2d2-623cd01ac948/ceilometer-central-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_423af688-f49e-42a8-b2d2-623cd01ac948/ceilometer-central-agent/0.log: no such file or directory Mar 12 16:28:08 crc kubenswrapper[4687]: I0312 16:28:08.292833 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:28:08 crc kubenswrapper[4687]: I0312 16:28:08.354004 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:28:09 crc kubenswrapper[4687]: E0312 16:28:09.007910 4687 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c2ead52f8f9e75b69f3390ea276d3d60add8b4845d703157d0cfa86135a1b97f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c2ead52f8f9e75b69f3390ea276d3d60add8b4845d703157d0cfa86135a1b97f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_423af688-f49e-42a8-b2d2-623cd01ac948/ceilometer-notification-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_423af688-f49e-42a8-b2d2-623cd01ac948/ceilometer-notification-agent/0.log: no such file or 
directory Mar 12 16:28:09 crc kubenswrapper[4687]: I0312 16:28:09.080524 4687 generic.go:334] "Generic (PLEG): container finished" podID="af6ae379-ebea-4caf-819d-2c566201e67a" containerID="149578c8036321e87943445e8117d2c7e4e6d2daa770b9a61319b32f4a0324cf" exitCode=0 Mar 12 16:28:09 crc kubenswrapper[4687]: I0312 16:28:09.080640 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7cr7" event={"ID":"af6ae379-ebea-4caf-819d-2c566201e67a","Type":"ContainerDied","Data":"149578c8036321e87943445e8117d2c7e4e6d2daa770b9a61319b32f4a0324cf"} Mar 12 16:28:09 crc kubenswrapper[4687]: I0312 16:28:09.110858 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftwv9"] Mar 12 16:28:09 crc kubenswrapper[4687]: E0312 16:28:09.711828 4687 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a6f62c0617f400284bfe0e15c2e6c85e3d1556a0ae4f07878f3f89f48f45719e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a6f62c0617f400284bfe0e15c2e6c85e3d1556a0ae4f07878f3f89f48f45719e/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_423af688-f49e-42a8-b2d2-623cd01ac948/sg-core/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_423af688-f49e-42a8-b2d2-623cd01ac948/sg-core/0.log: no such file or directory Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.095280 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftwv9" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="registry-server" containerID="cri-o://984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379" gracePeriod=2 Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.538985 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.664845 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-combined-ca-bundle\") pod \"af6ae379-ebea-4caf-819d-2c566201e67a\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.665724 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-config-data\") pod \"af6ae379-ebea-4caf-819d-2c566201e67a\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.665843 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-scripts\") pod \"af6ae379-ebea-4caf-819d-2c566201e67a\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.665895 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkcsn\" (UniqueName: \"kubernetes.io/projected/af6ae379-ebea-4caf-819d-2c566201e67a-kube-api-access-pkcsn\") pod \"af6ae379-ebea-4caf-819d-2c566201e67a\" (UID: \"af6ae379-ebea-4caf-819d-2c566201e67a\") " Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.671023 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6ae379-ebea-4caf-819d-2c566201e67a-kube-api-access-pkcsn" (OuterVolumeSpecName: "kube-api-access-pkcsn") pod "af6ae379-ebea-4caf-819d-2c566201e67a" (UID: "af6ae379-ebea-4caf-819d-2c566201e67a"). InnerVolumeSpecName "kube-api-access-pkcsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.675466 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-scripts" (OuterVolumeSpecName: "scripts") pod "af6ae379-ebea-4caf-819d-2c566201e67a" (UID: "af6ae379-ebea-4caf-819d-2c566201e67a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.708855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-config-data" (OuterVolumeSpecName: "config-data") pod "af6ae379-ebea-4caf-819d-2c566201e67a" (UID: "af6ae379-ebea-4caf-819d-2c566201e67a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.711445 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af6ae379-ebea-4caf-819d-2c566201e67a" (UID: "af6ae379-ebea-4caf-819d-2c566201e67a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.741951 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.772942 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.772982 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.773084 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6ae379-ebea-4caf-819d-2c566201e67a-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.773119 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkcsn\" (UniqueName: \"kubernetes.io/projected/af6ae379-ebea-4caf-819d-2c566201e67a-kube-api-access-pkcsn\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.874373 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-catalog-content\") pod \"c11ced5e-d2bf-469d-ac52-b8e99c304267\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.874681 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9qr2\" (UniqueName: \"kubernetes.io/projected/c11ced5e-d2bf-469d-ac52-b8e99c304267-kube-api-access-d9qr2\") pod \"c11ced5e-d2bf-469d-ac52-b8e99c304267\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.874740 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-utilities\") pod \"c11ced5e-d2bf-469d-ac52-b8e99c304267\" (UID: \"c11ced5e-d2bf-469d-ac52-b8e99c304267\") " Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.877488 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-utilities" (OuterVolumeSpecName: "utilities") pod "c11ced5e-d2bf-469d-ac52-b8e99c304267" (UID: "c11ced5e-d2bf-469d-ac52-b8e99c304267"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.879907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11ced5e-d2bf-469d-ac52-b8e99c304267-kube-api-access-d9qr2" (OuterVolumeSpecName: "kube-api-access-d9qr2") pod "c11ced5e-d2bf-469d-ac52-b8e99c304267" (UID: "c11ced5e-d2bf-469d-ac52-b8e99c304267"). InnerVolumeSpecName "kube-api-access-d9qr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.924950 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c11ced5e-d2bf-469d-ac52-b8e99c304267" (UID: "c11ced5e-d2bf-469d-ac52-b8e99c304267"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.977921 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.977959 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9qr2\" (UniqueName: \"kubernetes.io/projected/c11ced5e-d2bf-469d-ac52-b8e99c304267-kube-api-access-d9qr2\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:10 crc kubenswrapper[4687]: I0312 16:28:10.977983 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c11ced5e-d2bf-469d-ac52-b8e99c304267-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.109301 4687 generic.go:334] "Generic (PLEG): container finished" podID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerID="984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379" exitCode=0 Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.109391 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftwv9" event={"ID":"c11ced5e-d2bf-469d-ac52-b8e99c304267","Type":"ContainerDied","Data":"984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379"} Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.109424 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftwv9" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.109451 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftwv9" event={"ID":"c11ced5e-d2bf-469d-ac52-b8e99c304267","Type":"ContainerDied","Data":"a199f93379e51963349f65b720864077bf4fe78c9bf82349ee2db943fe5be78e"} Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.109476 4687 scope.go:117] "RemoveContainer" containerID="984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.113933 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7cr7" event={"ID":"af6ae379-ebea-4caf-819d-2c566201e67a","Type":"ContainerDied","Data":"162239a2e84cfa744854c6a5607b607a3fd8fb371d13f49a2676fe2b9c5c197c"} Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.113969 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162239a2e84cfa744854c6a5607b607a3fd8fb371d13f49a2676fe2b9c5c197c" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.114028 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7cr7" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.144498 4687 scope.go:117] "RemoveContainer" containerID="dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.166542 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftwv9"] Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.172329 4687 scope.go:117] "RemoveContainer" containerID="d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.176872 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftwv9"] Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.196118 4687 scope.go:117] "RemoveContainer" containerID="984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379" Mar 12 16:28:11 crc kubenswrapper[4687]: E0312 16:28:11.196453 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379\": container with ID starting with 984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379 not found: ID does not exist" containerID="984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.196484 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379"} err="failed to get container status \"984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379\": rpc error: code = NotFound desc = could not find container \"984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379\": container with ID starting with 984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379 not found: ID does not exist" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.196512 4687 scope.go:117] "RemoveContainer" containerID="dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4" Mar 12 16:28:11 crc kubenswrapper[4687]: E0312 16:28:11.196789 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4\": container with ID starting with dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4 not found: ID does not exist" containerID="dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.196836 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4"} err="failed to get container status \"dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4\": rpc error: code = NotFound desc = could not find container \"dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4\": container with ID starting with dc959ca80663b0a2c64fa3f947b1af0a26cc71130e51e243dcb12ef8141896f4 not found: ID does not exist" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.196872 4687 scope.go:117] "RemoveContainer" containerID="d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32" Mar 12 16:28:11 crc kubenswrapper[4687]: E0312 16:28:11.197148 4687 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32\": container with ID starting with d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32 not found: ID does not exist" containerID="d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.197172 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32"} err="failed to get container status \"d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32\": rpc error: code = NotFound desc = could not find container \"d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32\": container with ID starting with d911af394548d99f1d550d28bb21b031c2c83ac482277362d8973544ce236a32 not found: ID does not exist" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.220162 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.220453 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.226982 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.248384 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.339650 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.339881 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1261bb11-6314-4f83-98cd-e7d7abaf2d6c" containerName="nova-scheduler-scheduler" containerID="cri-o://87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b" gracePeriod=30 Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.360290 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.360511 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-log" containerID="cri-o://65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445" gracePeriod=30 Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.360923 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-api" containerID="cri-o://327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7" gracePeriod=30 Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.389425 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.491835 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-534b802982a7aa946d9ea50d3e61ce1c16ca86c14d1354d2144a67f7bb700f83": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-534b802982a7aa946d9ea50d3e61ce1c16ca86c14d1354d2144a67f7bb700f83: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.492075 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.492131 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-a44bbf553ce0744b04a4310793c1a224d441bea21dac98dbaeb3c6b3a9538877.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.497603 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc11ced5e_d2bf_469d_ac52_b8e99c304267.slice/crio-984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379.scope WatchSource:0}: Error finding container 984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379: Status 404 returned error can't find the container with id 984b5de0b44189afcfdc17679c779dc83b90d57cb5b8a2a2d010e7f4fb3a2379 Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.497811 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.497844 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-ef18a60c14bee6657c7611e2107d9e6b4c5f712f781626ad6316f2db9025339b.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.497870 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.497887 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-1e5865694a9e47893cce6050828c7304c378c5559898df6b25d77d0fd71362ec.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.502582 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-conmon-7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.502614 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75b499f_2946_4330_a364_5ef6e733a8da.slice/crio-7a6a20a935d7540bd26af66691c5645f903d21ce4282a524298e25c662e86a1b.scope: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.504253 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6932a099_9587_4b14_925a_433e077f505b.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6932a099_9587_4b14_925a_433e077f505b.slice: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: W0312 16:28:11.537733 4687 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6ae379_ebea_4caf_819d_2c566201e67a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6ae379_ebea_4caf_819d_2c566201e67a.slice: no such file or directory Mar 12 16:28:11 crc kubenswrapper[4687]: E0312 16:28:11.627013 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-conmon-e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22.scope\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-conmon-5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-conmon-736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-conmon-92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-cba0b8e9d34688a917b43da8abde36fcfc574d102ed304ea9034848b1b42efc5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-a9a549c5a53e7bf20aa6668ff704a1cf31d188e8ee6ea16fd156c0a2b9f31342\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:28:11 crc kubenswrapper[4687]: E0312 16:28:11.627115 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-conmon-e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-conmon-92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-a9a549c5a53e7bf20aa6668ff704a1cf31d188e8ee6ea16fd156c0a2b9f31342\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-conmon-736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-ad38f8bdba94ff33fec3a1d8b1be7bd6007bd327dadd7c26743decabc87d7da4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-d7ba665bd73a9f382cae2cf00a60e97cf515d10bde1d252b5a9daa2e5fe774be\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-cba0b8e9d34688a917b43da8abde36fcfc574d102ed304ea9034848b1b42efc5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-conmon-5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:28:11 crc kubenswrapper[4687]: E0312 16:28:11.627246 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-a9a549c5a53e7bf20aa6668ff704a1cf31d188e8ee6ea16fd156c0a2b9f31342\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-3a41f78013f674a8ab515fd508e1d982db4723cdc0b231e8eb27e6e640856288\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-conmon-736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-conmon-92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-95bd863e1efff19b934f68de5b8b7217b30a882369b4037c3514ae41b51acd22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-7bcca95cad4127b4c651da280778e020c202c4dd364ea02f949f4eed98d731ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8d7b398_f0d2_45be_894b_f982f5216512.slice/crio-736339cabb7ec14992958e0bee3503dbcfe112e50379f4ae2719fd24a2dc43ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-92515cffcc6331b5978faa4b7294458a09e8dbb26daff191a992c38a69d4828b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423af688_f49e_42a8_b2d2_623cd01ac948.slice/crio-conmon-04b3a0b935365baabf1324af504017d5e9cc3985a7188434535dc6f31ccdb38e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-cba0b8e9d34688a917b43da8abde36fcfc574d102ed304ea9034848b1b42efc5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-d7ba665bd73a9f382cae2cf00a60e97cf515d10bde1d252b5a9daa2e5fe774be\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-conmon-5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c9c5fb_55e9_4941_92ee_074bb7937135.slice/crio-conmon-e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bb5c86_04c9_48c0_87f8_0230288ad725.slice/crio-5aefa1449ec095a81301337ec487c2996ef431b91dc87a3da7c08f696bb4111d.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:28:11 crc kubenswrapper[4687]: I0312 16:28:11.752150 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" path="/var/lib/kubelet/pods/c11ced5e-d2bf-469d-ac52-b8e99c304267/volumes" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.089345 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.165979 4687 generic.go:334] "Generic (PLEG): container finished" podID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerID="327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7" exitCode=0 Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.166012 4687 generic.go:334] "Generic (PLEG): container finished" podID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerID="65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445" exitCode=143 Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.166059 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b10456c5-1f71-40ff-bfbb-628757cefc6a","Type":"ContainerDied","Data":"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7"} Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.166086 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b10456c5-1f71-40ff-bfbb-628757cefc6a","Type":"ContainerDied","Data":"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445"} Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.166096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b10456c5-1f71-40ff-bfbb-628757cefc6a","Type":"ContainerDied","Data":"00bca8e3c3692df0f5c8276feaf78c8c967755d651c2ebe2c71420cba60b471d"} Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.166117 4687 scope.go:117] "RemoveContainer" containerID="327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.166346 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.172976 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerID="a297d8c22eaafc3e99151538d12ee8c1bf348f62cec4da2c101391c630be57cc" exitCode=137 Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.173038 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerDied","Data":"a297d8c22eaafc3e99151538d12ee8c1bf348f62cec4da2c101391c630be57cc"} Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.198906 4687 scope.go:117] "RemoveContainer" containerID="65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.213665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-public-tls-certs\") pod \"b10456c5-1f71-40ff-bfbb-628757cefc6a\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.214705 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-config-data\") pod \"b10456c5-1f71-40ff-bfbb-628757cefc6a\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.214881 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbfns\" (UniqueName: \"kubernetes.io/projected/b10456c5-1f71-40ff-bfbb-628757cefc6a-kube-api-access-fbfns\") pod \"b10456c5-1f71-40ff-bfbb-628757cefc6a\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.215141 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-combined-ca-bundle\") pod \"b10456c5-1f71-40ff-bfbb-628757cefc6a\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.215510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-internal-tls-certs\") pod \"b10456c5-1f71-40ff-bfbb-628757cefc6a\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.215752 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10456c5-1f71-40ff-bfbb-628757cefc6a-logs\") pod \"b10456c5-1f71-40ff-bfbb-628757cefc6a\" (UID: \"b10456c5-1f71-40ff-bfbb-628757cefc6a\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.224803 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b10456c5-1f71-40ff-bfbb-628757cefc6a-logs" (OuterVolumeSpecName: "logs") pod "b10456c5-1f71-40ff-bfbb-628757cefc6a" (UID: "b10456c5-1f71-40ff-bfbb-628757cefc6a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.226983 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10456c5-1f71-40ff-bfbb-628757cefc6a-kube-api-access-fbfns" (OuterVolumeSpecName: "kube-api-access-fbfns") pod "b10456c5-1f71-40ff-bfbb-628757cefc6a" (UID: "b10456c5-1f71-40ff-bfbb-628757cefc6a"). InnerVolumeSpecName "kube-api-access-fbfns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.243740 4687 scope.go:117] "RemoveContainer" containerID="327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.250292 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7\": container with ID starting with 327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7 not found: ID does not exist" containerID="327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.250340 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7"} err="failed to get container status \"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7\": rpc error: code = NotFound desc = could not find container \"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7\": container with ID starting with 327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7 not found: ID does not exist" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.250396 4687 scope.go:117] "RemoveContainer" containerID="65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.251623 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445\": container with ID starting with 65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445 not found: ID does not exist" containerID="65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.252097 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445"} err="failed to get container status \"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445\": rpc error: code = NotFound desc = could not find container \"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445\": container with ID starting with 65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445 not found: ID does not exist" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.252118 4687 scope.go:117] "RemoveContainer" containerID="327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.256807 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7"} err="failed to get container status \"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7\": rpc error: code = NotFound desc = could not 
find container \"327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7\": container with ID starting with 327e6a69bad7c3fc9a0282e4e0ca437a02e3155dff1d4d92d3c37d3b847e66d7 not found: ID does not exist" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.256856 4687 scope.go:117] "RemoveContainer" containerID="65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.257770 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445"} err="failed to get container status \"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445\": rpc error: code = NotFound desc = could not find container \"65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445\": container with ID starting with 65880d4897334ad7314ea653fadc97a25bcd6aada6d83957878faf7313b07445 not found: ID does not exist" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.306139 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-config-data" (OuterVolumeSpecName: "config-data") pod "b10456c5-1f71-40ff-bfbb-628757cefc6a" (UID: "b10456c5-1f71-40ff-bfbb-628757cefc6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.310163 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b10456c5-1f71-40ff-bfbb-628757cefc6a" (UID: "b10456c5-1f71-40ff-bfbb-628757cefc6a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.320710 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b10456c5-1f71-40ff-bfbb-628757cefc6a" (UID: "b10456c5-1f71-40ff-bfbb-628757cefc6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.329292 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b10456c5-1f71-40ff-bfbb-628757cefc6a-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.329472 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.329550 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.329610 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbfns\" (UniqueName: \"kubernetes.io/projected/b10456c5-1f71-40ff-bfbb-628757cefc6a-kube-api-access-fbfns\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.330341 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.383727 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b10456c5-1f71-40ff-bfbb-628757cefc6a" (UID: "b10456c5-1f71-40ff-bfbb-628757cefc6a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.433347 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b10456c5-1f71-40ff-bfbb-628757cefc6a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.583431 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.617076 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.633426 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.661665 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662180 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="extract-utilities" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662198 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="extract-utilities" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662207 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6ae379-ebea-4caf-819d-2c566201e67a" containerName="nova-manage" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662215 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6ae379-ebea-4caf-819d-2c566201e67a" containerName="nova-manage" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662224 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-notifier" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662230 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-notifier" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662245 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerName="init" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662250 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerName="init" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662263 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerName="dnsmasq-dns" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662269 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerName="dnsmasq-dns" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662282 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-api" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662289 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-api" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662308 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-api" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662314 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-api" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662324 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6932a099-9587-4b14-925a-433e077f505b" containerName="oc" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662329 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6932a099-9587-4b14-925a-433e077f505b" containerName="oc" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662340 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-listener" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662346 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-listener" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662358 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-log" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662364 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-log" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662405 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="extract-content" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662412 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="extract-content" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662422 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-evaluator" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662428 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-evaluator" Mar 12 16:28:12 crc kubenswrapper[4687]: E0312 16:28:12.662437 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="registry-server" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662443 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="registry-server" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662640 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11ced5e-d2bf-469d-ac52-b8e99c304267" containerName="registry-server" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662650 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6932a099-9587-4b14-925a-433e077f505b" containerName="oc" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662663 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-listener" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662679 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-log" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662692 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9aadaa-3727-48e2-b482-86ccb2f809cf" containerName="dnsmasq-dns" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662701 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" containerName="nova-api-api" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662709 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-notifier" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662718 4687 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-evaluator" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662727 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6ae379-ebea-4caf-819d-2c566201e67a" containerName="nova-manage" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.662739 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" containerName="aodh-api" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.663913 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.666940 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.667082 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.667179 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.705332 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.738839 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.740027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-config-data\") pod \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.740134 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle\") pod \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.740314 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4nvk\" (UniqueName: \"kubernetes.io/projected/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-kube-api-access-w4nvk\") pod \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.740444 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-scripts\") pod \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.741081 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfw6\" (UniqueName: \"kubernetes.io/projected/ad69d278-88f9-4542-9673-664b522fd89c-kube-api-access-5xfw6\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.741167 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.741198 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.741301 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-config-data\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.741317 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.741407 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad69d278-88f9-4542-9673-664b522fd89c-logs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.768187 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-scripts" (OuterVolumeSpecName: "scripts") pod "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" (UID: "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.769731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-kube-api-access-w4nvk" (OuterVolumeSpecName: "kube-api-access-w4nvk") pod "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" (UID: "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6"). InnerVolumeSpecName "kube-api-access-w4nvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.842889 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-combined-ca-bundle\") pod \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.843505 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-config-data\") pod \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.843911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bckbq\" (UniqueName: \"kubernetes.io/projected/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-kube-api-access-bckbq\") pod \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\" (UID: \"1261bb11-6314-4f83-98cd-e7d7abaf2d6c\") " Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.846595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xfw6\" (UniqueName: \"kubernetes.io/projected/ad69d278-88f9-4542-9673-664b522fd89c-kube-api-access-5xfw6\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.846708 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.847790 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.848141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-config-data\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.848163 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.848414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad69d278-88f9-4542-9673-664b522fd89c-logs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.848638 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4nvk\" (UniqueName: \"kubernetes.io/projected/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-kube-api-access-w4nvk\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc 
kubenswrapper[4687]: I0312 16:28:12.848658 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.851220 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad69d278-88f9-4542-9673-664b522fd89c-logs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.852239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.853189 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-kube-api-access-bckbq" (OuterVolumeSpecName: "kube-api-access-bckbq") pod "1261bb11-6314-4f83-98cd-e7d7abaf2d6c" (UID: "1261bb11-6314-4f83-98cd-e7d7abaf2d6c"). InnerVolumeSpecName "kube-api-access-bckbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.855126 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.860269 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.864414 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xfw6\" (UniqueName: \"kubernetes.io/projected/ad69d278-88f9-4542-9673-664b522fd89c-kube-api-access-5xfw6\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.865773 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad69d278-88f9-4542-9673-664b522fd89c-config-data\") pod \"nova-api-0\" (UID: \"ad69d278-88f9-4542-9673-664b522fd89c\") " pod="openstack/nova-api-0" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.878067 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1261bb11-6314-4f83-98cd-e7d7abaf2d6c" (UID: "1261bb11-6314-4f83-98cd-e7d7abaf2d6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.889848 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-config-data" (OuterVolumeSpecName: "config-data") pod "1261bb11-6314-4f83-98cd-e7d7abaf2d6c" (UID: "1261bb11-6314-4f83-98cd-e7d7abaf2d6c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.906879 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-config-data" (OuterVolumeSpecName: "config-data") pod "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" (UID: "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.950822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" (UID: "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.951225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle\") pod \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\" (UID: \"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6\") " Mar 12 16:28:12 crc kubenswrapper[4687]: W0312 16:28:12.951402 4687 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6/volumes/kubernetes.io~secret/combined-ca-bundle Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.951446 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" (UID: "d5438880-3f2b-45ec-91ec-5f43b5aa0fe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.952171 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.952194 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.952204 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.952214 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.952223 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bckbq\" (UniqueName: \"kubernetes.io/projected/1261bb11-6314-4f83-98cd-e7d7abaf2d6c-kube-api-access-bckbq\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:12 crc kubenswrapper[4687]: I0312 16:28:12.987456 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.191651 4687 generic.go:334] "Generic (PLEG): container finished" podID="1261bb11-6314-4f83-98cd-e7d7abaf2d6c" containerID="87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b" exitCode=0 Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.191940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1261bb11-6314-4f83-98cd-e7d7abaf2d6c","Type":"ContainerDied","Data":"87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b"} Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.191974 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1261bb11-6314-4f83-98cd-e7d7abaf2d6c","Type":"ContainerDied","Data":"4d77099dc3fe5a12182eb6e9075289621d39f9a6817021d47bca7f3d9c76e1ed"} Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.191993 4687 scope.go:117] "RemoveContainer" containerID="87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.191892 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.202370 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-log" containerID="cri-o://7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222" gracePeriod=30 Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.202774 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.202803 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d5438880-3f2b-45ec-91ec-5f43b5aa0fe6","Type":"ContainerDied","Data":"83a5358925be1c8290279db0b8c8bbf2cd2fcce4c764f5498a61bd9108b0cc87"} Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.202846 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-metadata" containerID="cri-o://b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2" gracePeriod=30 Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.223498 4687 scope.go:117] "RemoveContainer" containerID="87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b" Mar 12 16:28:13 crc kubenswrapper[4687]: E0312 16:28:13.223867 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b\": container with ID starting with 87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b not found: ID does not exist" containerID="87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.223898 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b"} err="failed to get container status \"87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b\": rpc error: code = NotFound desc = could not find container \"87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b\": container 
with ID starting with 87547ea77511b3fbcc4e5cf69144c8f887a857fca8c99afa6eb90db35bc8502b not found: ID does not exist" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.223919 4687 scope.go:117] "RemoveContainer" containerID="a297d8c22eaafc3e99151538d12ee8c1bf348f62cec4da2c101391c630be57cc" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.410065 4687 scope.go:117] "RemoveContainer" containerID="5e9f2e79c1f78bfa2687a77bb2dc35258aa7a560d537f01f6600b8c4cfc628ef" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.418055 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.433476 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.449183 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: E0312 16:28:13.450513 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1261bb11-6314-4f83-98cd-e7d7abaf2d6c" containerName="nova-scheduler-scheduler" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.450545 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1261bb11-6314-4f83-98cd-e7d7abaf2d6c" containerName="nova-scheduler-scheduler" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.450778 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1261bb11-6314-4f83-98cd-e7d7abaf2d6c" containerName="nova-scheduler-scheduler" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.452009 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.457909 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.458529 4687 scope.go:117] "RemoveContainer" containerID="390082cf03e25e5144cb0f4cc754b6cd39183f0c51d7f19ec68fd974e91c2371" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.465315 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf5ecb2-7178-4f78-8942-f75d813da22a-config-data\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.465431 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf5ecb2-7178-4f78-8942-f75d813da22a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.465488 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f6w9\" (UniqueName: \"kubernetes.io/projected/1bf5ecb2-7178-4f78-8942-f75d813da22a-kube-api-access-5f6w9\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.479911 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.498625 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 12 16:28:13 crc 
kubenswrapper[4687]: I0312 16:28:13.519631 4687 scope.go:117] "RemoveContainer" containerID="b9aaaa533562a6e0809a958d5136ae4d9da5dcf9d8363fed1113325a046c15d4" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.561953 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.585516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf5ecb2-7178-4f78-8942-f75d813da22a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.585716 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f6w9\" (UniqueName: \"kubernetes.io/projected/1bf5ecb2-7178-4f78-8942-f75d813da22a-kube-api-access-5f6w9\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.586029 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf5ecb2-7178-4f78-8942-f75d813da22a-config-data\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.609729 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf5ecb2-7178-4f78-8942-f75d813da22a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.610443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf5ecb2-7178-4f78-8942-f75d813da22a-config-data\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.612934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f6w9\" (UniqueName: \"kubernetes.io/projected/1bf5ecb2-7178-4f78-8942-f75d813da22a-kube-api-access-5f6w9\") pod \"nova-scheduler-0\" (UID: \"1bf5ecb2-7178-4f78-8942-f75d813da22a\") " pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.613002 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.629242 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.631135 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.633050 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.633277 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-w4khk" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.633505 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.633806 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.633928 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.663714 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.687908 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-internal-tls-certs\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.687985 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-combined-ca-bundle\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.688115 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnn2b\" (UniqueName: \"kubernetes.io/projected/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-kube-api-access-bnn2b\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.688469 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-scripts\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.688653 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-config-data\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.688910 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-public-tls-certs\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.760100 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1261bb11-6314-4f83-98cd-e7d7abaf2d6c" 
path="/var/lib/kubelet/pods/1261bb11-6314-4f83-98cd-e7d7abaf2d6c/volumes" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.761279 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10456c5-1f71-40ff-bfbb-628757cefc6a" path="/var/lib/kubelet/pods/b10456c5-1f71-40ff-bfbb-628757cefc6a/volumes" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.761917 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5438880-3f2b-45ec-91ec-5f43b5aa0fe6" path="/var/lib/kubelet/pods/d5438880-3f2b-45ec-91ec-5f43b5aa0fe6/volumes" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.777852 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.790834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-public-tls-certs\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.790903 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-internal-tls-certs\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.790991 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-combined-ca-bundle\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.791032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnn2b\" (UniqueName: \"kubernetes.io/projected/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-kube-api-access-bnn2b\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.791150 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-scripts\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.791210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-config-data\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.797995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-scripts\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.798576 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-combined-ca-bundle\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.798931 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-internal-tls-certs\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.799030 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-public-tls-certs\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.799883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-config-data\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:13 crc kubenswrapper[4687]: I0312 16:28:13.810941 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnn2b\" (UniqueName: \"kubernetes.io/projected/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-kube-api-access-bnn2b\") pod \"aodh-0\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " pod="openstack/aodh-0" Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.030544 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.122037 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.122097 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.122141 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.123096 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.123158 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" gracePeriod=600 Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.270954 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad69d278-88f9-4542-9673-664b522fd89c","Type":"ContainerStarted","Data":"cff7fa34ef41e779e80f7b8d9bec034791499d3a07a6711a01b6037f87316c2b"} Mar 12 16:28:14 crc 
kubenswrapper[4687]: I0312 16:28:14.271012 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad69d278-88f9-4542-9673-664b522fd89c","Type":"ContainerStarted","Data":"a86c41391ec8f1f0465166117e25a02913da9fd273b4407f21df994a9b801174"} Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.271025 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad69d278-88f9-4542-9673-664b522fd89c","Type":"ContainerStarted","Data":"7fc62135866e26a571bef7f085cd9ca7260c7479f8eba19e209e575e9c119959"} Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.275938 4687 generic.go:334] "Generic (PLEG): container finished" podID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerID="7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222" exitCode=143 Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.275969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8409fe01-316a-4d7b-88a5-d8a454d3962f","Type":"ContainerDied","Data":"7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222"} Mar 12 16:28:14 crc kubenswrapper[4687]: E0312 16:28:14.279836 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.287555 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.316363 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.316340138 podStartE2EDuration="2.316340138s" podCreationTimestamp="2026-03-12 16:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:28:14.288990974 +0000 UTC m=+1543.252953318" watchObservedRunningTime="2026-03-12 16:28:14.316340138 +0000 UTC m=+1543.280302482" Mar 12 16:28:14 crc kubenswrapper[4687]: W0312 16:28:14.667439 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod674a82b3_79e5_4e5f_9e0e_0fa47dc24b89.slice/crio-4fe859d825e13fc90e25ce154b2df3da083c27c98fe835fb8676eab88e885362 WatchSource:0}: Error finding container 4fe859d825e13fc90e25ce154b2df3da083c27c98fe835fb8676eab88e885362: Status 404 returned error can't find the container with id 4fe859d825e13fc90e25ce154b2df3da083c27c98fe835fb8676eab88e885362 Mar 12 16:28:14 crc kubenswrapper[4687]: I0312 16:28:14.669745 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.294167 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" exitCode=0 Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.294221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" 
event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4"} Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.294613 4687 scope.go:117] "RemoveContainer" containerID="a6a61ada4aa1cfe2e90a3b1ef14049216d76e56f45c1ef544713761e4d6a8f41" Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.295442 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:28:15 crc kubenswrapper[4687]: E0312 16:28:15.295975 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.298162 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerStarted","Data":"4fe859d825e13fc90e25ce154b2df3da083c27c98fe835fb8676eab88e885362"} Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.301598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bf5ecb2-7178-4f78-8942-f75d813da22a","Type":"ContainerStarted","Data":"ec6bd62c0826f925fb737f74faf21e1cf3fd36cf23e02a4e72f25f75c6818ec1"} Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.301653 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bf5ecb2-7178-4f78-8942-f75d813da22a","Type":"ContainerStarted","Data":"9900a2e73f06f0c43aa58880304fb7452456f24b28259cd92543225e501a58e1"} Mar 12 16:28:15 crc kubenswrapper[4687]: I0312 16:28:15.353983 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.353957857 podStartE2EDuration="2.353957857s" podCreationTimestamp="2026-03-12 16:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:28:15.339144039 +0000 UTC m=+1544.303106393" watchObservedRunningTime="2026-03-12 16:28:15.353957857 +0000 UTC m=+1544.317920221" Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.317176 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerStarted","Data":"52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5"} Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.317894 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerStarted","Data":"b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf"} Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.384005 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": read tcp 10.217.0.2:36242->10.217.1.3:8775: read: connection reset by peer" Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.386190 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": read tcp 10.217.0.2:36232->10.217.1.3:8775: read: connection reset by peer" Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.874569 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.981486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twdjl\" (UniqueName: \"kubernetes.io/projected/8409fe01-316a-4d7b-88a5-d8a454d3962f-kube-api-access-twdjl\") pod \"8409fe01-316a-4d7b-88a5-d8a454d3962f\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.981614 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8409fe01-316a-4d7b-88a5-d8a454d3962f-logs\") pod \"8409fe01-316a-4d7b-88a5-d8a454d3962f\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.981719 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-nova-metadata-tls-certs\") pod \"8409fe01-316a-4d7b-88a5-d8a454d3962f\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.981780 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-combined-ca-bundle\") pod \"8409fe01-316a-4d7b-88a5-d8a454d3962f\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.981856 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-config-data\") pod \"8409fe01-316a-4d7b-88a5-d8a454d3962f\" (UID: \"8409fe01-316a-4d7b-88a5-d8a454d3962f\") " Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.982115 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8409fe01-316a-4d7b-88a5-d8a454d3962f-logs" (OuterVolumeSpecName: "logs") pod "8409fe01-316a-4d7b-88a5-d8a454d3962f" (UID: "8409fe01-316a-4d7b-88a5-d8a454d3962f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.982387 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8409fe01-316a-4d7b-88a5-d8a454d3962f-logs\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:16 crc kubenswrapper[4687]: I0312 16:28:16.990824 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8409fe01-316a-4d7b-88a5-d8a454d3962f-kube-api-access-twdjl" (OuterVolumeSpecName: "kube-api-access-twdjl") pod "8409fe01-316a-4d7b-88a5-d8a454d3962f" (UID: "8409fe01-316a-4d7b-88a5-d8a454d3962f"). InnerVolumeSpecName "kube-api-access-twdjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.015965 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-config-data" (OuterVolumeSpecName: "config-data") pod "8409fe01-316a-4d7b-88a5-d8a454d3962f" (UID: "8409fe01-316a-4d7b-88a5-d8a454d3962f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.017278 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8409fe01-316a-4d7b-88a5-d8a454d3962f" (UID: "8409fe01-316a-4d7b-88a5-d8a454d3962f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.049173 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8409fe01-316a-4d7b-88a5-d8a454d3962f" (UID: "8409fe01-316a-4d7b-88a5-d8a454d3962f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.084585 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twdjl\" (UniqueName: \"kubernetes.io/projected/8409fe01-316a-4d7b-88a5-d8a454d3962f-kube-api-access-twdjl\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.085020 4687 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.085080 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.085137 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8409fe01-316a-4d7b-88a5-d8a454d3962f-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.335011 4687 generic.go:334] "Generic (PLEG): container finished" podID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerID="b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2" exitCode=0 Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.335096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8409fe01-316a-4d7b-88a5-d8a454d3962f","Type":"ContainerDied","Data":"b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2"} Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.335130 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8409fe01-316a-4d7b-88a5-d8a454d3962f","Type":"ContainerDied","Data":"a679d91c5ade45b01623613d90973dfab9bf3ccd4ddf80198866553da72c22a1"} Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.335150 4687 scope.go:117] "RemoveContainer" containerID="b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2" Mar 12 16:28:17 crc 
kubenswrapper[4687]: I0312 16:28:17.335299 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.340656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerStarted","Data":"f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81"} Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.383338 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.405081 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.405505 4687 scope.go:117] "RemoveContainer" containerID="7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.423510 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:28:17 crc kubenswrapper[4687]: E0312 16:28:17.424373 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-log" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.424398 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-log" Mar 12 16:28:17 crc kubenswrapper[4687]: E0312 16:28:17.424467 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-metadata" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.424477 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-metadata" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.424886 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-log" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.424940 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" containerName="nova-metadata-metadata" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.431565 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.435915 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.436116 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.462756 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.466656 4687 scope.go:117] "RemoveContainer" containerID="b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2" Mar 12 16:28:17 crc kubenswrapper[4687]: E0312 16:28:17.467218 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2\": container with ID starting with b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2 not found: ID does not exist" containerID="b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.467241 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2"} err="failed to get container status \"b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2\": rpc error: code = NotFound desc = could not find container \"b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2\": container with ID starting with b925dc157fb670487c7a18875d7a886e1f9748e0699065be21aca6c8da958ec2 not found: ID does not exist" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.467290 4687 scope.go:117] "RemoveContainer" containerID="7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222" Mar 12 16:28:17 crc kubenswrapper[4687]: E0312 16:28:17.467668 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222\": container with ID starting with 7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222 not found: ID does not exist" containerID="7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.467685 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222"} err="failed to get container status \"7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222\": rpc error: code = NotFound desc = could not find container \"7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222\": container with ID starting with 7b854e7119a355c4e704d6bc9491d4a615e1efa09659954d04ac4bf322487222 not found: ID does not exist" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.599783 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.599873 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.599895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk652\" (UniqueName: \"kubernetes.io/projected/7d138177-7712-4706-8e2c-db53f8914cca-kube-api-access-dk652\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.600041 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-config-data\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.600475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d138177-7712-4706-8e2c-db53f8914cca-logs\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.702687 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.702793 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.702810 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk652\" (UniqueName: \"kubernetes.io/projected/7d138177-7712-4706-8e2c-db53f8914cca-kube-api-access-dk652\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.702855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-config-data\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.702952 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d138177-7712-4706-8e2c-db53f8914cca-logs\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.703702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d138177-7712-4706-8e2c-db53f8914cca-logs\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") 
" pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.707164 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.707761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.716289 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d138177-7712-4706-8e2c-db53f8914cca-config-data\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.722546 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk652\" (UniqueName: \"kubernetes.io/projected/7d138177-7712-4706-8e2c-db53f8914cca-kube-api-access-dk652\") pod \"nova-metadata-0\" (UID: \"7d138177-7712-4706-8e2c-db53f8914cca\") " pod="openstack/nova-metadata-0" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.745969 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8409fe01-316a-4d7b-88a5-d8a454d3962f" path="/var/lib/kubelet/pods/8409fe01-316a-4d7b-88a5-d8a454d3962f/volumes" Mar 12 16:28:17 crc kubenswrapper[4687]: I0312 16:28:17.871475 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 16:28:18 crc kubenswrapper[4687]: I0312 16:28:18.376563 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 16:28:18 crc kubenswrapper[4687]: I0312 16:28:18.384137 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerStarted","Data":"985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b"} Mar 12 16:28:18 crc kubenswrapper[4687]: W0312 16:28:18.388130 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d138177_7712_4706_8e2c_db53f8914cca.slice/crio-199b09edb67eedad4c3a9b4e0bb319ee851266bae13478748c80927baee56a00 WatchSource:0}: Error finding container 199b09edb67eedad4c3a9b4e0bb319ee851266bae13478748c80927baee56a00: Status 404 returned error can't find the container with id 199b09edb67eedad4c3a9b4e0bb319ee851266bae13478748c80927baee56a00 Mar 12 16:28:18 crc kubenswrapper[4687]: I0312 16:28:18.406709 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.763128912 podStartE2EDuration="5.406690935s" podCreationTimestamp="2026-03-12 16:28:13 +0000 UTC" firstStartedPulling="2026-03-12 16:28:14.670652822 +0000 UTC m=+1543.634615166" lastFinishedPulling="2026-03-12 16:28:17.314214845 +0000 UTC m=+1546.278177189" observedRunningTime="2026-03-12 16:28:18.40127458 +0000 UTC m=+1547.365236934" watchObservedRunningTime="2026-03-12 16:28:18.406690935 +0000 UTC m=+1547.370653279" Mar 12 16:28:18 crc kubenswrapper[4687]: I0312 16:28:18.778098 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 16:28:19 crc kubenswrapper[4687]: I0312 16:28:19.395745 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d138177-7712-4706-8e2c-db53f8914cca","Type":"ContainerStarted","Data":"f701d4d6888beec9a5a86cf942f74f7aefbdb7832f670ad5d559aa6ff238a0b1"} Mar 12 16:28:19 crc kubenswrapper[4687]: I0312 16:28:19.396771 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d138177-7712-4706-8e2c-db53f8914cca","Type":"ContainerStarted","Data":"43b60deee0f9fe0056b82e67e151a3115f4f4d735b0c098f0c9aaba027e01118"} Mar 12 16:28:19 crc kubenswrapper[4687]: I0312 16:28:19.396861 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d138177-7712-4706-8e2c-db53f8914cca","Type":"ContainerStarted","Data":"199b09edb67eedad4c3a9b4e0bb319ee851266bae13478748c80927baee56a00"} Mar 12 16:28:19 crc kubenswrapper[4687]: I0312 16:28:19.412240 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.412222721 podStartE2EDuration="2.412222721s" podCreationTimestamp="2026-03-12 16:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:28:19.409781846 +0000 UTC m=+1548.373744190" watchObservedRunningTime="2026-03-12 16:28:19.412222721 +0000 UTC m=+1548.376185065" Mar 12 16:28:21 crc kubenswrapper[4687]: E0312 16:28:21.958805 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Mar 12 16:28:22 
crc kubenswrapper[4687]: I0312 16:28:22.872198 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 16:28:22 crc kubenswrapper[4687]: I0312 16:28:22.872506 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 16:28:22 crc kubenswrapper[4687]: I0312 16:28:22.988592 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 16:28:22 crc kubenswrapper[4687]: I0312 16:28:22.988654 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 16:28:23 crc kubenswrapper[4687]: I0312 16:28:23.778535 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 16:28:23 crc kubenswrapper[4687]: I0312 16:28:23.810469 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 16:28:24 crc kubenswrapper[4687]: I0312 16:28:24.001522 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad69d278-88f9-4542-9673-664b522fd89c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 16:28:24 crc kubenswrapper[4687]: I0312 16:28:24.001551 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad69d278-88f9-4542-9673-664b522fd89c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 16:28:24 crc kubenswrapper[4687]: I0312 16:28:24.492406 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 16:28:26 crc kubenswrapper[4687]: I0312 16:28:26.734986 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:28:26 crc kubenswrapper[4687]: E0312 16:28:26.736386 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:28:27 crc kubenswrapper[4687]: I0312 16:28:27.379516 4687 scope.go:117] "RemoveContainer" containerID="d8957448fb78dd111b081ccd6ceaf0817d0165cd845e66f32c88f4cd81231f29" Mar 12 16:28:27 crc kubenswrapper[4687]: I0312 16:28:27.872607 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 16:28:27 crc kubenswrapper[4687]: I0312 16:28:27.872839 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 16:28:28 crc kubenswrapper[4687]: I0312 16:28:28.888524 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7d138177-7712-4706-8e2c-db53f8914cca" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 16:28:28 crc kubenswrapper[4687]: I0312 16:28:28.888573 4687 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="7d138177-7712-4706-8e2c-db53f8914cca" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 16:28:32 crc kubenswrapper[4687]: I0312 16:28:32.377755 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 16:28:32 crc kubenswrapper[4687]: I0312 16:28:32.994668 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 16:28:32 crc kubenswrapper[4687]: I0312 16:28:32.995185 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 16:28:32 crc kubenswrapper[4687]: I0312 16:28:32.996502 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 16:28:33 crc kubenswrapper[4687]: I0312 16:28:33.001562 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 16:28:33 crc kubenswrapper[4687]: I0312 16:28:33.587622 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 16:28:33 crc kubenswrapper[4687]: I0312 16:28:33.597431 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.083270 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.083855 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b458feda-ef86-41d9-a1bc-3091254b086c" containerName="kube-state-metrics" containerID="cri-o://349a96d569511cce7e1aa8aebed25455e3777ae8ed981fc3b610167c0f18d1e7" gracePeriod=30 Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.198642 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.199282 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" containerName="mysqld-exporter" containerID="cri-o://bd8c38f6cd0da85a8b4edc6cbbec645ce6d6c58579356d5bdd85efc4d94e3d2d" gracePeriod=30 Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.643187 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" containerID="bd8c38f6cd0da85a8b4edc6cbbec645ce6d6c58579356d5bdd85efc4d94e3d2d" exitCode=2 Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.643392 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6","Type":"ContainerDied","Data":"bd8c38f6cd0da85a8b4edc6cbbec645ce6d6c58579356d5bdd85efc4d94e3d2d"} Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.645555 4687 generic.go:334] "Generic (PLEG): container finished" podID="b458feda-ef86-41d9-a1bc-3091254b086c" containerID="349a96d569511cce7e1aa8aebed25455e3777ae8ed981fc3b610167c0f18d1e7" exitCode=2 Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.645596 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b458feda-ef86-41d9-a1bc-3091254b086c","Type":"ContainerDied","Data":"349a96d569511cce7e1aa8aebed25455e3777ae8ed981fc3b610167c0f18d1e7"} 
Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.645621 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b458feda-ef86-41d9-a1bc-3091254b086c","Type":"ContainerDied","Data":"71a2c20eeefadda1ab3cf20a89a397260cab32c3cc9d70a2b4a27ed33fc76c47"} Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.645635 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a2c20eeefadda1ab3cf20a89a397260cab32c3cc9d70a2b4a27ed33fc76c47" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.734666 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:28:37 crc kubenswrapper[4687]: E0312 16:28:37.734903 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.781453 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.789264 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.878416 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.881930 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.886503 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.905887 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-config-data\") pod \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.905972 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-combined-ca-bundle\") pod \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.905998 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbnsn\" (UniqueName: \"kubernetes.io/projected/b458feda-ef86-41d9-a1bc-3091254b086c-kube-api-access-bbnsn\") pod \"b458feda-ef86-41d9-a1bc-3091254b086c\" (UID: \"b458feda-ef86-41d9-a1bc-3091254b086c\") " Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.906131 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djqzr\" (UniqueName: \"kubernetes.io/projected/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-kube-api-access-djqzr\") pod \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\" (UID: \"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6\") " Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 
16:28:37.919098 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b458feda-ef86-41d9-a1bc-3091254b086c-kube-api-access-bbnsn" (OuterVolumeSpecName: "kube-api-access-bbnsn") pod "b458feda-ef86-41d9-a1bc-3091254b086c" (UID: "b458feda-ef86-41d9-a1bc-3091254b086c"). InnerVolumeSpecName "kube-api-access-bbnsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.919200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-kube-api-access-djqzr" (OuterVolumeSpecName: "kube-api-access-djqzr") pod "d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" (UID: "d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6"). InnerVolumeSpecName "kube-api-access-djqzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:37 crc kubenswrapper[4687]: I0312 16:28:37.961562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" (UID: "d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.000919 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-config-data" (OuterVolumeSpecName: "config-data") pod "d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" (UID: "d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.009526 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.009562 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.009574 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbnsn\" (UniqueName: \"kubernetes.io/projected/b458feda-ef86-41d9-a1bc-3091254b086c-kube-api-access-bbnsn\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.009584 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djqzr\" (UniqueName: \"kubernetes.io/projected/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6-kube-api-access-djqzr\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.659197 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.659197 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.659228 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6","Type":"ContainerDied","Data":"e04be4784875d78439338854bd5bd5ac355dce05f8a39074f50ff06acf1ea586"} Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.659856 4687 scope.go:117] "RemoveContainer" containerID="bd8c38f6cd0da85a8b4edc6cbbec645ce6d6c58579356d5bdd85efc4d94e3d2d" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.664799 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.740115 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.766423 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.785425 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.806954 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: E0312 16:28:38.807937 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b458feda-ef86-41d9-a1bc-3091254b086c" containerName="kube-state-metrics" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.807959 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b458feda-ef86-41d9-a1bc-3091254b086c" containerName="kube-state-metrics" Mar 12 16:28:38 crc kubenswrapper[4687]: E0312 16:28:38.807976 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" containerName="mysqld-exporter" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.807982 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" containerName="mysqld-exporter" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.808186 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b458feda-ef86-41d9-a1bc-3091254b086c" containerName="kube-state-metrics" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.808217 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" containerName="mysqld-exporter" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.816889 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.820233 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.820492 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.821141 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.853024 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.864838 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.866454 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.868213 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.868862 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.885246 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.948965 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.949011 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.949045 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.949244 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfxc\" (UniqueName: \"kubernetes.io/projected/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-api-access-7wfxc\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.949427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-config-data\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:38 crc 
kubenswrapper[4687]: I0312 16:28:38.949639 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.949690 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:38 crc kubenswrapper[4687]: I0312 16:28:38.949925 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6f9\" (UniqueName: \"kubernetes.io/projected/407b0a6d-21bf-462c-88b5-4326f412af6d-kube-api-access-bd6f9\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.051938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd6f9\" (UniqueName: \"kubernetes.io/projected/407b0a6d-21bf-462c-88b5-4326f412af6d-kube-api-access-bd6f9\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.052005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.052031 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.052069 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.052139 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfxc\" (UniqueName: \"kubernetes.io/projected/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-api-access-7wfxc\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.052216 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-config-data\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.052306 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.052332 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.058970 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.059774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.066165 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.066312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.066548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.066675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407b0a6d-21bf-462c-88b5-4326f412af6d-config-data\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.070576 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd6f9\" (UniqueName: \"kubernetes.io/projected/407b0a6d-21bf-462c-88b5-4326f412af6d-kube-api-access-bd6f9\") pod \"mysqld-exporter-0\" (UID: \"407b0a6d-21bf-462c-88b5-4326f412af6d\") " pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.072845 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfxc\" (UniqueName: 
\"kubernetes.io/projected/550e3dcc-162b-4c82-8a8f-81e03e689772-kube-api-access-7wfxc\") pod \"kube-state-metrics-0\" (UID: \"550e3dcc-162b-4c82-8a8f-81e03e689772\") " pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.144267 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.182938 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.203040 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.203972 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="proxy-httpd" containerID="cri-o://5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07" gracePeriod=30 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.204055 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-notification-agent" containerID="cri-o://b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a" gracePeriod=30 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.203980 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="sg-core" containerID="cri-o://c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe" gracePeriod=30 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.204353 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-central-agent" containerID="cri-o://ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8" gracePeriod=30 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.681640 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.685116 4687 generic.go:334] "Generic (PLEG): container finished" podID="f5891051-42e9-4519-a759-9305c817e4b0" containerID="5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07" exitCode=0 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.685143 4687 generic.go:334] "Generic (PLEG): container finished" podID="f5891051-42e9-4519-a759-9305c817e4b0" containerID="c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe" exitCode=2 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.685153 4687 generic.go:334] "Generic (PLEG): container finished" podID="f5891051-42e9-4519-a759-9305c817e4b0" containerID="ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8" exitCode=0 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.685420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerDied","Data":"5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07"} Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.685458 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerDied","Data":"c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe"} Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.685471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerDied","Data":"ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8"} Mar 12 16:28:39 crc kubenswrapper[4687]: W0312 16:28:39.685880 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod407b0a6d_21bf_462c_88b5_4326f412af6d.slice/crio-ce546ff5f59fa7f5f5af8f9644c6ae8f066f7e7cf62e6339c5b4193a8423a3e2 WatchSource:0}: Error finding container ce546ff5f59fa7f5f5af8f9644c6ae8f066f7e7cf62e6339c5b4193a8423a3e2: Status 404 returned error can't find the container with id ce546ff5f59fa7f5f5af8f9644c6ae8f066f7e7cf62e6339c5b4193a8423a3e2 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.744850 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b458feda-ef86-41d9-a1bc-3091254b086c" path="/var/lib/kubelet/pods/b458feda-ef86-41d9-a1bc-3091254b086c/volumes" Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.746348 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6" path="/var/lib/kubelet/pods/d5ae6cf3-2bb9-4af3-8f90-8778d82d61f6/volumes" Mar 12 16:28:39 crc kubenswrapper[4687]: W0312 16:28:39.839434 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod550e3dcc_162b_4c82_8a8f_81e03e689772.slice/crio-e2aac8d6c0c726631492133367b7a41fdd19068c049121aebae1306796404091 WatchSource:0}: Error finding container e2aac8d6c0c726631492133367b7a41fdd19068c049121aebae1306796404091: Status 404 returned error can't find the container with id e2aac8d6c0c726631492133367b7a41fdd19068c049121aebae1306796404091 Mar 12 16:28:39 crc kubenswrapper[4687]: I0312 16:28:39.843688 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 16:28:40 crc kubenswrapper[4687]: I0312 16:28:40.710322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"550e3dcc-162b-4c82-8a8f-81e03e689772","Type":"ContainerStarted","Data":"a40dc9ed2037de9dc94cfb0d2ed959fdd50be897b7951cf7626041bd8a65eef7"} Mar 12 16:28:40 crc kubenswrapper[4687]: I0312 16:28:40.711394 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 16:28:40 crc kubenswrapper[4687]: I0312 16:28:40.711417 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"550e3dcc-162b-4c82-8a8f-81e03e689772","Type":"ContainerStarted","Data":"e2aac8d6c0c726631492133367b7a41fdd19068c049121aebae1306796404091"} Mar 12 16:28:40 crc kubenswrapper[4687]: I0312 16:28:40.712630 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"407b0a6d-21bf-462c-88b5-4326f412af6d","Type":"ContainerStarted","Data":"24f12625f488dd41c689e1a4c67d6b9996476af73d8b16908335d9d52e7a46fd"} Mar 12 16:28:40 crc kubenswrapper[4687]: I0312 16:28:40.712701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"407b0a6d-21bf-462c-88b5-4326f412af6d","Type":"ContainerStarted","Data":"ce546ff5f59fa7f5f5af8f9644c6ae8f066f7e7cf62e6339c5b4193a8423a3e2"} Mar 12 16:28:40 crc kubenswrapper[4687]: I0312 16:28:40.738847 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.250601017 podStartE2EDuration="2.738627618s" podCreationTimestamp="2026-03-12 16:28:38 +0000 UTC" firstStartedPulling="2026-03-12 16:28:39.843878343 +0000 UTC m=+1568.807840687" lastFinishedPulling="2026-03-12 16:28:40.331904944 +0000 UTC m=+1569.295867288" observedRunningTime="2026-03-12 16:28:40.730387862 +0000 UTC m=+1569.694350206" watchObservedRunningTime="2026-03-12 16:28:40.738627618 +0000 UTC m=+1569.702589962" Mar 12 16:28:40 crc kubenswrapper[4687]: I0312 16:28:40.749307 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.219985348 podStartE2EDuration="2.74928499s" podCreationTimestamp="2026-03-12 16:28:38 +0000 UTC" firstStartedPulling="2026-03-12 16:28:39.688850955 +0000 UTC m=+1568.652813299" lastFinishedPulling="2026-03-12 16:28:40.218150607 +0000 UTC m=+1569.182112941" observedRunningTime="2026-03-12 16:28:40.748973661 +0000 UTC m=+1569.712936005" watchObservedRunningTime="2026-03-12 16:28:40.74928499 +0000 UTC m=+1569.713247334" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.666988 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.726586 4687 generic.go:334] "Generic (PLEG): container finished" podID="f5891051-42e9-4519-a759-9305c817e4b0" containerID="b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a" exitCode=0 Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.726669 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.726708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerDied","Data":"b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a"} Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.726759 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5891051-42e9-4519-a759-9305c817e4b0","Type":"ContainerDied","Data":"ecf6b6ec10f10360e832959d3fd440000180ab302cc45432bb15d061584f22a7"} Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.726778 4687 scope.go:117] "RemoveContainer" containerID="5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736195 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-log-httpd\") pod \"f5891051-42e9-4519-a759-9305c817e4b0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736306 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-combined-ca-bundle\") pod \"f5891051-42e9-4519-a759-9305c817e4b0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736386 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-config-data\") pod \"f5891051-42e9-4519-a759-9305c817e4b0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736416 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt7n4\" (UniqueName: \"kubernetes.io/projected/f5891051-42e9-4519-a759-9305c817e4b0-kube-api-access-nt7n4\") pod \"f5891051-42e9-4519-a759-9305c817e4b0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736439 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-scripts\") pod \"f5891051-42e9-4519-a759-9305c817e4b0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736485 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-run-httpd\") pod \"f5891051-42e9-4519-a759-9305c817e4b0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736568 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-sg-core-conf-yaml\") pod \"f5891051-42e9-4519-a759-9305c817e4b0\" (UID: \"f5891051-42e9-4519-a759-9305c817e4b0\") " Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.736841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"f5891051-42e9-4519-a759-9305c817e4b0" (UID: "f5891051-42e9-4519-a759-9305c817e4b0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.737528 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.738183 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f5891051-42e9-4519-a759-9305c817e4b0" (UID: "f5891051-42e9-4519-a759-9305c817e4b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.741775 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5891051-42e9-4519-a759-9305c817e4b0-kube-api-access-nt7n4" (OuterVolumeSpecName: "kube-api-access-nt7n4") pod "f5891051-42e9-4519-a759-9305c817e4b0" (UID: "f5891051-42e9-4519-a759-9305c817e4b0"). InnerVolumeSpecName "kube-api-access-nt7n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.741943 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-scripts" (OuterVolumeSpecName: "scripts") pod "f5891051-42e9-4519-a759-9305c817e4b0" (UID: "f5891051-42e9-4519-a759-9305c817e4b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.767439 4687 scope.go:117] "RemoveContainer" containerID="c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.774349 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f5891051-42e9-4519-a759-9305c817e4b0" (UID: "f5891051-42e9-4519-a759-9305c817e4b0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.839968 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt7n4\" (UniqueName: \"kubernetes.io/projected/f5891051-42e9-4519-a759-9305c817e4b0-kube-api-access-nt7n4\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.839991 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.840000 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5891051-42e9-4519-a759-9305c817e4b0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.840008 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.852882 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5891051-42e9-4519-a759-9305c817e4b0" (UID: "f5891051-42e9-4519-a759-9305c817e4b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.887626 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-config-data" (OuterVolumeSpecName: "config-data") pod "f5891051-42e9-4519-a759-9305c817e4b0" (UID: "f5891051-42e9-4519-a759-9305c817e4b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.888754 4687 scope.go:117] "RemoveContainer" containerID="b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.910334 4687 scope.go:117] "RemoveContainer" containerID="ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.932815 4687 scope.go:117] "RemoveContainer" containerID="5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07" Mar 12 16:28:41 crc kubenswrapper[4687]: E0312 16:28:41.933522 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07\": container with ID starting with 5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07 not found: ID does not exist" containerID="5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.933630 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07"} err="failed to get container status \"5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07\": rpc error: code = NotFound desc = could not find container \"5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07\": container with ID starting with 5b64b9564d4a74bfdf23abfba8ce18a93cf13f12d55ab37b5e32058940b00b07 not found: ID does not exist" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.933708 4687 scope.go:117] "RemoveContainer" containerID="c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe" Mar 12 16:28:41 crc kubenswrapper[4687]: E0312 16:28:41.934058 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe\": container with ID starting with c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe not found: ID does not exist" containerID="c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.934097 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe"} err="failed to get container status \"c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe\": rpc error: code = NotFound desc = could not find container \"c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe\": container with ID starting with c49d9fecb7225c2a41784996853367c0a13408893297120787bc623805fc3dbe not found: ID does not exist" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.934123 4687 scope.go:117] "RemoveContainer" containerID="b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a" Mar 12 16:28:41 crc kubenswrapper[4687]: E0312 16:28:41.934426 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a\": container with ID starting with b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a not found: ID does not exist" containerID="b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a" Mar 12 16:28:41 crc 
kubenswrapper[4687]: I0312 16:28:41.935002 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a"} err="failed to get container status \"b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a\": rpc error: code = NotFound desc = could not find container \"b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a\": container with ID starting with b40f3917cb137aa5169b874e956f135b032e0e9f2219e8e429ede77fdb47f76a not found: ID does not exist" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.935165 4687 scope.go:117] "RemoveContainer" containerID="ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8" Mar 12 16:28:41 crc kubenswrapper[4687]: E0312 16:28:41.935568 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8\": container with ID starting with ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8 not found: ID does not exist" containerID="ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.935595 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8"} err="failed to get container status \"ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8\": rpc error: code = NotFound desc = could not find container \"ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8\": container with ID starting with ee038887e6cb2c004e8915075591c97bb12e49b327034a8aa92dcc836eb64aa8 not found: ID does not exist" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.942551 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:41 crc kubenswrapper[4687]: I0312 16:28:41.942680 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5891051-42e9-4519-a759-9305c817e4b0-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.074965 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.100772 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.114377 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:42 crc kubenswrapper[4687]: E0312 16:28:42.114964 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-notification-agent" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.114990 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-notification-agent" Mar 12 16:28:42 crc kubenswrapper[4687]: E0312 16:28:42.115002 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-central-agent" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.115008 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-central-agent" Mar 12 16:28:42 crc kubenswrapper[4687]: E0312 16:28:42.115037 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="sg-core" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.115042 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="sg-core" Mar 12 16:28:42 crc kubenswrapper[4687]: E0312 16:28:42.115071 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="proxy-httpd" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.115077 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="proxy-httpd" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.115310 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-central-agent" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.115327 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="sg-core" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.115340 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="proxy-httpd" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.115439 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5891051-42e9-4519-a759-9305c817e4b0" containerName="ceilometer-notification-agent" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.117618 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.120894 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.121130 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.124693 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.139802 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.248996 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdjlj\" (UniqueName: \"kubernetes.io/projected/faa1e24d-17d8-4d5c-8008-dba40687eaea-kube-api-access-rdjlj\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.249059 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-log-httpd\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.249144 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.249231 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-scripts\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.249304 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-run-httpd\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.249333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.249478 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-config-data\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.249506 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354034 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-config-data\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354098 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdjlj\" (UniqueName: \"kubernetes.io/projected/faa1e24d-17d8-4d5c-8008-dba40687eaea-kube-api-access-rdjlj\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354232 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-log-httpd\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354387 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-scripts\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354460 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-run-httpd\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.354839 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-log-httpd\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.355218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-run-httpd\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.358093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.358218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.360150 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-scripts\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.360896 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.361207 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-config-data\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.371491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdjlj\" (UniqueName: \"kubernetes.io/projected/faa1e24d-17d8-4d5c-8008-dba40687eaea-kube-api-access-rdjlj\") pod \"ceilometer-0\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.437286 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:42 crc kubenswrapper[4687]: I0312 16:28:42.972902 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:42 crc kubenswrapper[4687]: W0312 16:28:42.977238 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaa1e24d_17d8_4d5c_8008_dba40687eaea.slice/crio-0b19b1408ee1af214bef09198657c1fa30047edf32fe8e146643001e17f0c2ac WatchSource:0}: Error finding container 0b19b1408ee1af214bef09198657c1fa30047edf32fe8e146643001e17f0c2ac: Status 404 returned error can't find the container with id 0b19b1408ee1af214bef09198657c1fa30047edf32fe8e146643001e17f0c2ac Mar 12 16:28:43 crc kubenswrapper[4687]: I0312 16:28:43.749990 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5891051-42e9-4519-a759-9305c817e4b0" path="/var/lib/kubelet/pods/f5891051-42e9-4519-a759-9305c817e4b0/volumes" Mar 12 16:28:43 crc kubenswrapper[4687]: I0312 16:28:43.752808 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerStarted","Data":"0b19b1408ee1af214bef09198657c1fa30047edf32fe8e146643001e17f0c2ac"} Mar 12 16:28:44 crc kubenswrapper[4687]: I0312 16:28:44.765478 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerStarted","Data":"c8059ee7f3863a814ba714804d9e22ef22b54a30f1457768cdc5c904baf19ced"} Mar 12 16:28:45 crc kubenswrapper[4687]: I0312 16:28:45.783516 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerStarted","Data":"8a7352899df864d74fb1e078a2aa253c3a284dd702bf4bccbb448aa2bd2ffbdb"} Mar 12 16:28:45 crc kubenswrapper[4687]: I0312 16:28:45.784308 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerStarted","Data":"9a01de34a0e7e72dfe5229ea0d42b0f1d35beda354292f3ec24011cdd5afde59"} Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.503504 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-zd9d7"] Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.514581 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-zd9d7"] Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.601989 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-dv82n"] Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.603897 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.615260 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dv82n"] Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.715759 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fpt\" (UniqueName: \"kubernetes.io/projected/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-kube-api-access-c7fpt\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.715857 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-config-data\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.715919 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-combined-ca-bundle\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.733455 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:28:48 crc kubenswrapper[4687]: E0312 16:28:48.733739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.818782 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-config-data\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.818903 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-combined-ca-bundle\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.819179 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7fpt\" (UniqueName: \"kubernetes.io/projected/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-kube-api-access-c7fpt\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.824607 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-combined-ca-bundle\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 
16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.825085 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-config-data\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.842483 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7fpt\" (UniqueName: \"kubernetes.io/projected/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-kube-api-access-c7fpt\") pod \"heat-db-sync-dv82n\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.842929 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerStarted","Data":"e32c24672382983066da997446056ba834e78872e54fb8d9191cff72552749eb"} Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.843579 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.867421 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.219241643 podStartE2EDuration="6.867402089s" podCreationTimestamp="2026-03-12 16:28:42 +0000 UTC" firstStartedPulling="2026-03-12 16:28:42.980977946 +0000 UTC m=+1571.944940290" lastFinishedPulling="2026-03-12 16:28:47.629138372 +0000 UTC m=+1576.593100736" observedRunningTime="2026-03-12 16:28:48.860412507 +0000 UTC m=+1577.824374871" watchObservedRunningTime="2026-03-12 16:28:48.867402089 +0000 UTC m=+1577.831364433" Mar 12 16:28:48 crc kubenswrapper[4687]: I0312 16:28:48.920056 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dv82n" Mar 12 16:28:49 crc kubenswrapper[4687]: I0312 16:28:49.200317 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 16:28:49 crc kubenswrapper[4687]: W0312 16:28:49.411793 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f9932b_ed80_41a1_aea7_60b0466ebe7b.slice/crio-38a22b15cb83af8f79d9df6fb4828fec82a7efec937fe617d0bc25500d989622 WatchSource:0}: Error finding container 38a22b15cb83af8f79d9df6fb4828fec82a7efec937fe617d0bc25500d989622: Status 404 returned error can't find the container with id 38a22b15cb83af8f79d9df6fb4828fec82a7efec937fe617d0bc25500d989622 Mar 12 16:28:49 crc kubenswrapper[4687]: I0312 16:28:49.460112 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dv82n"] Mar 12 16:28:49 crc kubenswrapper[4687]: I0312 16:28:49.752143 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9846a398-b104-418e-ace9-6eb022ddacbb" path="/var/lib/kubelet/pods/9846a398-b104-418e-ace9-6eb022ddacbb/volumes" Mar 12 16:28:49 crc kubenswrapper[4687]: I0312 16:28:49.856959 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dv82n" event={"ID":"d5f9932b-ed80-41a1-aea7-60b0466ebe7b","Type":"ContainerStarted","Data":"38a22b15cb83af8f79d9df6fb4828fec82a7efec937fe617d0bc25500d989622"} Mar 12 16:28:50 crc kubenswrapper[4687]: I0312 16:28:50.800110 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.004790 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.005094 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-central-agent" containerID="cri-o://c8059ee7f3863a814ba714804d9e22ef22b54a30f1457768cdc5c904baf19ced" gracePeriod=30 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.005248 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="proxy-httpd" containerID="cri-o://e32c24672382983066da997446056ba834e78872e54fb8d9191cff72552749eb" gracePeriod=30 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.005303 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="sg-core" containerID="cri-o://8a7352899df864d74fb1e078a2aa253c3a284dd702bf4bccbb448aa2bd2ffbdb" gracePeriod=30 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.005349 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-notification-agent" containerID="cri-o://9a01de34a0e7e72dfe5229ea0d42b0f1d35beda354292f3ec24011cdd5afde59" gracePeriod=30 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.755823 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919692 4687 generic.go:334] "Generic (PLEG): container finished" podID="faa1e24d-17d8-4d5c-8008-dba40687eaea" 
containerID="e32c24672382983066da997446056ba834e78872e54fb8d9191cff72552749eb" exitCode=0 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919719 4687 generic.go:334] "Generic (PLEG): container finished" podID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerID="8a7352899df864d74fb1e078a2aa253c3a284dd702bf4bccbb448aa2bd2ffbdb" exitCode=2 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919728 4687 generic.go:334] "Generic (PLEG): container finished" podID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerID="9a01de34a0e7e72dfe5229ea0d42b0f1d35beda354292f3ec24011cdd5afde59" exitCode=0 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919756 4687 generic.go:334] "Generic (PLEG): container finished" podID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerID="c8059ee7f3863a814ba714804d9e22ef22b54a30f1457768cdc5c904baf19ced" exitCode=0 Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerDied","Data":"e32c24672382983066da997446056ba834e78872e54fb8d9191cff72552749eb"} Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919803 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerDied","Data":"8a7352899df864d74fb1e078a2aa253c3a284dd702bf4bccbb448aa2bd2ffbdb"} Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919836 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerDied","Data":"9a01de34a0e7e72dfe5229ea0d42b0f1d35beda354292f3ec24011cdd5afde59"} Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.919849 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerDied","Data":"c8059ee7f3863a814ba714804d9e22ef22b54a30f1457768cdc5c904baf19ced"} Mar 12 16:28:51 crc kubenswrapper[4687]: I0312 16:28:51.983352 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035064 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-config-data\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035209 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdjlj\" (UniqueName: \"kubernetes.io/projected/faa1e24d-17d8-4d5c-8008-dba40687eaea-kube-api-access-rdjlj\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035257 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-run-httpd\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035319 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-scripts\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035350 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-sg-core-conf-yaml\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035399 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-ceilometer-tls-certs\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035477 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-log-httpd\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035538 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-combined-ca-bundle\") pod \"faa1e24d-17d8-4d5c-8008-dba40687eaea\" (UID: \"faa1e24d-17d8-4d5c-8008-dba40687eaea\") " Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.035631 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.036048 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.041612 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa1e24d-17d8-4d5c-8008-dba40687eaea-kube-api-access-rdjlj" (OuterVolumeSpecName: "kube-api-access-rdjlj") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "kube-api-access-rdjlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.043302 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.044485 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-scripts" (OuterVolumeSpecName: "scripts") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.106459 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.129112 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.138535 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.138564 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.138574 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/faa1e24d-17d8-4d5c-8008-dba40687eaea-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.138583 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdjlj\" (UniqueName: \"kubernetes.io/projected/faa1e24d-17d8-4d5c-8008-dba40687eaea-kube-api-access-rdjlj\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.138592 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.188006 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.237782 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-config-data" (OuterVolumeSpecName: "config-data") pod "faa1e24d-17d8-4d5c-8008-dba40687eaea" (UID: "faa1e24d-17d8-4d5c-8008-dba40687eaea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.240312 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.240343 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1e24d-17d8-4d5c-8008-dba40687eaea-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.937193 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"faa1e24d-17d8-4d5c-8008-dba40687eaea","Type":"ContainerDied","Data":"0b19b1408ee1af214bef09198657c1fa30047edf32fe8e146643001e17f0c2ac"} Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.937241 4687 scope.go:117] "RemoveContainer" containerID="e32c24672382983066da997446056ba834e78872e54fb8d9191cff72552749eb" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.937443 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.963116 4687 scope.go:117] "RemoveContainer" containerID="8a7352899df864d74fb1e078a2aa253c3a284dd702bf4bccbb448aa2bd2ffbdb" Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.979814 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:52 crc kubenswrapper[4687]: I0312 16:28:52.997571 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.006637 4687 scope.go:117] "RemoveContainer" containerID="9a01de34a0e7e72dfe5229ea0d42b0f1d35beda354292f3ec24011cdd5afde59" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.008245 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:53 crc kubenswrapper[4687]: E0312 16:28:53.008703 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="proxy-httpd" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.008719 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="proxy-httpd" Mar 12 16:28:53 crc kubenswrapper[4687]: E0312 16:28:53.008751 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-central-agent" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.008758 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-central-agent" Mar 12 16:28:53 crc kubenswrapper[4687]: E0312 16:28:53.008775 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-notification-agent" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.008782 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-notification-agent" Mar 12 16:28:53 crc kubenswrapper[4687]: E0312 16:28:53.008802 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="sg-core" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.008808 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="sg-core" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.009007 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-notification-agent" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.009025 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="sg-core" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.009041 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="ceilometer-central-agent" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.009050 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" containerName="proxy-httpd" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.011086 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.013603 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.014333 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.014472 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.031991 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.052176 4687 scope.go:117] "RemoveContainer" containerID="c8059ee7f3863a814ba714804d9e22ef22b54a30f1457768cdc5c904baf19ced" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.057551 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-scripts\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.057677 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.057752 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgf97\" (UniqueName: \"kubernetes.io/projected/e68df22d-ffc8-4f99-a009-1133f37d9a67-kube-api-access-cgf97\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.057825 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.057848 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-config-data\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.057882 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.057917 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 
16:28:53.057958 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.160268 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.160327 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.160354 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.160458 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-scripts\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.160545 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.160595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgf97\" (UniqueName: \"kubernetes.io/projected/e68df22d-ffc8-4f99-a009-1133f37d9a67-kube-api-access-cgf97\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.160980 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.161040 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.161068 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-config-data\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: 
I0312 16:28:53.161405 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.166167 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.166882 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.167549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-scripts\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.181390 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-config-data\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.181753 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgf97\" (UniqueName: \"kubernetes.io/projected/e68df22d-ffc8-4f99-a009-1133f37d9a67-kube-api-access-cgf97\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.195191 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.336349 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.748003 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa1e24d-17d8-4d5c-8008-dba40687eaea" path="/var/lib/kubelet/pods/faa1e24d-17d8-4d5c-8008-dba40687eaea/volumes" Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.865381 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 16:28:53 crc kubenswrapper[4687]: I0312 16:28:53.958185 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerStarted","Data":"4f06b8b856af03ed17d19d54d0665e264bde4a60934bf5edb5e6da0fbf9b1312"} Mar 12 16:28:55 crc kubenswrapper[4687]: I0312 16:28:55.332170 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="rabbitmq" containerID="cri-o://7b0e59304d730b2823fb550356eb276687767419fd27e5af45ae6396cd176e8e" gracePeriod=604796 Mar 12 16:28:56 crc kubenswrapper[4687]: I0312 16:28:56.719181 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="rabbitmq" containerID="cri-o://963c2aca180558b10e64cb275b5a9ea2370cabec78549813d99926381e8a452b" gracePeriod=604796 Mar 12 16:28:58 crc kubenswrapper[4687]: I0312 16:28:58.256605 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Mar 12 16:28:58 crc kubenswrapper[4687]: I0312 16:28:58.553388 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 12 16:29:02 crc kubenswrapper[4687]: I0312 16:29:02.732940 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:29:02 crc kubenswrapper[4687]: E0312 16:29:02.734060 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:29:03 crc kubenswrapper[4687]: I0312 16:29:03.064137 4687 generic.go:334] "Generic (PLEG): container finished" podID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerID="963c2aca180558b10e64cb275b5a9ea2370cabec78549813d99926381e8a452b" exitCode=0 Mar 12 16:29:03 crc kubenswrapper[4687]: I0312 16:29:03.064188 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d83f3b6-81ea-457d-815e-22eab66d4058","Type":"ContainerDied","Data":"963c2aca180558b10e64cb275b5a9ea2370cabec78549813d99926381e8a452b"} Mar 12 16:29:03 crc kubenswrapper[4687]: I0312 16:29:03.072813 4687 generic.go:334] "Generic (PLEG): container finished" podID="4ee0a72c-d057-4042-8611-c509c4ed3edf" 
containerID="7b0e59304d730b2823fb550356eb276687767419fd27e5af45ae6396cd176e8e" exitCode=0 Mar 12 16:29:03 crc kubenswrapper[4687]: I0312 16:29:03.072868 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"4ee0a72c-d057-4042-8611-c509c4ed3edf","Type":"ContainerDied","Data":"7b0e59304d730b2823fb550356eb276687767419fd27e5af45ae6396cd176e8e"} Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.828304 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nhcst"] Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.834342 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.836294 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.872816 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nhcst"] Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.911289 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.912201 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.912673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.912885 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-config\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.913556 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:05 crc kubenswrapper[4687]: I0312 16:29:05.913894 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxp2\" (UniqueName: \"kubernetes.io/projected/402d753d-5ffe-4835-9d73-cfe434a79c71-kube-api-access-mxxp2\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:05 crc 
kubenswrapper[4687]: I0312 16:29:05.914632 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.016782 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.016885 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.016919 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-config\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.016947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.017002 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxp2\" (UniqueName: \"kubernetes.io/projected/402d753d-5ffe-4835-9d73-cfe434a79c71-kube-api-access-mxxp2\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.017039 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.017159 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.017720 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 
16:29:06.017861 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.018259 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.018636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.018843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-config\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.019176 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.038855 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxp2\" (UniqueName: \"kubernetes.io/projected/402d753d-5ffe-4835-9d73-cfe434a79c71-kube-api-access-mxxp2\") pod \"dnsmasq-dns-7d84b4d45c-nhcst\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:06 crc kubenswrapper[4687]: I0312 16:29:06.166298 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:08 crc kubenswrapper[4687]: I0312 16:29:08.256583 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Mar 12 16:29:08 crc kubenswrapper[4687]: I0312 16:29:08.553049 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 12 16:29:11 crc kubenswrapper[4687]: E0312 16:29:11.873601 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 12 16:29:11 crc kubenswrapper[4687]: E0312 16:29:11.874072 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 12 16:29:11 crc kubenswrapper[4687]: E0312 16:29:11.874217 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7fpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-dv82n_openstack(d5f9932b-ed80-41a1-aea7-60b0466ebe7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Mar 12 16:29:11 crc kubenswrapper[4687]: E0312 16:29:11.875431 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-dv82n" podUID="d5f9932b-ed80-41a1-aea7-60b0466ebe7b" Mar 12 16:29:11 crc kubenswrapper[4687]: I0312 16:29:11.957637 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.080406 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d83f3b6-81ea-457d-815e-22eab66d4058-erlang-cookie-secret\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081076 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081234 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-plugins\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081263 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-server-conf\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081295 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-tls\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081379 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-confd\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081434 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-plugins-conf\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081476 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-erlang-cookie\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081533 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pbfpr\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-kube-api-access-pbfpr\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081584 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d83f3b6-81ea-457d-815e-22eab66d4058-pod-info\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.081619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-config-data\") pod \"2d83f3b6-81ea-457d-815e-22eab66d4058\" (UID: \"2d83f3b6-81ea-457d-815e-22eab66d4058\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.082489 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.083326 4687 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.085935 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.086606 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d83f3b6-81ea-457d-815e-22eab66d4058-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.087764 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.094569 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.102879 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-kube-api-access-pbfpr" (OuterVolumeSpecName: "kube-api-access-pbfpr") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "kube-api-access-pbfpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.103036 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2d83f3b6-81ea-457d-815e-22eab66d4058-pod-info" (OuterVolumeSpecName: "pod-info") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.133542 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f" (OuterVolumeSpecName: "persistence") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.144173 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-config-data" (OuterVolumeSpecName: "config-data") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.157533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-server-conf" (OuterVolumeSpecName: "server-conf") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185680 4687 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d83f3b6-81ea-457d-815e-22eab66d4058-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185732 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") on node \"crc\" " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185745 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185754 4687 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185765 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185774 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185782 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbfpr\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-kube-api-access-pbfpr\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185790 4687 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d83f3b6-81ea-457d-815e-22eab66d4058-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.185798 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d83f3b6-81ea-457d-815e-22eab66d4058-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.219377 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d83f3b6-81ea-457d-815e-22eab66d4058","Type":"ContainerDied","Data":"42c12be576425074aa5393de159ab517b138bb990d79929ad4aa09a1b4f5eb3a"} Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.219455 4687 scope.go:117] "RemoveContainer" containerID="963c2aca180558b10e64cb275b5a9ea2370cabec78549813d99926381e8a452b" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.219404 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.221750 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-dv82n" podUID="d5f9932b-ed80-41a1-aea7-60b0466ebe7b" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.226523 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.227254 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f") on node "crc" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.264488 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2d83f3b6-81ea-457d-815e-22eab66d4058" (UID: "2d83f3b6-81ea-457d-815e-22eab66d4058"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.288056 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.288094 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d83f3b6-81ea-457d-815e-22eab66d4058-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.517549 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.528001 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.528397 4687 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.528702 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n556h68chd5h687h548hbbh65ch569hf7h5c8h668hb5h54bh9dh7ch679h5cbh5f8h64fh65dh548h5cfh568h55dh96h675h96h67h67dh556h57fh94q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgf97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e68df22d-ffc8-4f99-a009-1133f37d9a67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.557267 4687 scope.go:117] "RemoveContainer" containerID="cbb7ab54062a9c089c1ea7df44f36b511eb1f51b9b3ba64674dd3f4420154282" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.626009 4687 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.697848 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-confd\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.698155 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-plugins-conf\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.698316 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ee0a72c-d057-4042-8611-c509c4ed3edf-erlang-cookie-secret\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.698486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-server-conf\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.698707 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ee0a72c-d057-4042-8611-c509c4ed3edf-pod-info\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.698852 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-config-data\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.699212 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.701691 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.701881 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-tls\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.702024 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxzqw\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-kube-api-access-hxzqw\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.702152 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-erlang-cookie\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.702284 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-plugins\") pod \"4ee0a72c-d057-4042-8611-c509c4ed3edf\" (UID: \"4ee0a72c-d057-4042-8611-c509c4ed3edf\") " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.704153 4687 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.708283 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.709219 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.711596 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.721686 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-kube-api-access-hxzqw" (OuterVolumeSpecName: "kube-api-access-hxzqw") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "kube-api-access-hxzqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.730855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4ee0a72c-d057-4042-8611-c509c4ed3edf-pod-info" (OuterVolumeSpecName: "pod-info") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.731495 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee0a72c-d057-4042-8611-c509c4ed3edf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.732576 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.743562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807" (OuterVolumeSpecName: "persistence") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.752747 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-config-data" (OuterVolumeSpecName: "config-data") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.752802 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.753422 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="setup-container" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.753439 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="setup-container" Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.753454 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="setup-container" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.753461 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="setup-container" Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.753484 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="rabbitmq" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.753490 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="rabbitmq" Mar 12 16:29:12 crc kubenswrapper[4687]: E0312 16:29:12.753509 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="rabbitmq" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.753515 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="rabbitmq" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.753740 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" containerName="rabbitmq" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.753763 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" containerName="rabbitmq" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.755121 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.757204 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.758491 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.758697 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jw674" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.758821 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.758937 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.759735 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.760164 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.784322 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.785339 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-server-conf" (OuterVolumeSpecName: "server-conf") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806733 4687 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ee0a72c-d057-4042-8611-c509c4ed3edf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806759 4687 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806772 4687 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ee0a72c-d057-4042-8611-c509c4ed3edf-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806784 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ee0a72c-d057-4042-8611-c509c4ed3edf-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806822 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") on node \"crc\" " Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806838 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806851 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxzqw\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-kube-api-access-hxzqw\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806867 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.806879 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.885206 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.885372 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807") on node "crc" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.910669 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4d17b17-5362-40cb-ab0d-d39d96702d69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.910731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.910785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.910809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4d17b17-5362-40cb-ab0d-d39d96702d69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.910850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.910866 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.910881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.911047 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc 
kubenswrapper[4687]: I0312 16:29:12.911081 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.912055 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.912164 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnld\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-kube-api-access-sfnld\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.912470 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:12 crc kubenswrapper[4687]: I0312 16:29:12.995684 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4ee0a72c-d057-4042-8611-c509c4ed3edf" (UID: "4ee0a72c-d057-4042-8611-c509c4ed3edf"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.020602 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.020721 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.020809 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.020859 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnld\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-kube-api-access-sfnld\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.025530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4d17b17-5362-40cb-ab0d-d39d96702d69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.025609 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.025741 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.025792 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4d17b17-5362-40cb-ab0d-d39d96702d69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.025871 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: 
I0312 16:29:13.025895 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.025932 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.026153 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ee0a72c-d057-4042-8611-c509c4ed3edf-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.026496 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.029734 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.030178 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4d17b17-5362-40cb-ab0d-d39d96702d69-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.030516 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.031883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.037223 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4d17b17-5362-40cb-ab0d-d39d96702d69-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.038436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.039196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.040172 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.040203 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aab5bd533d8baf2f941bc7008b1a1aa0f57238177d0ea423d0fb2976721576da/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.042903 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4d17b17-5362-40cb-ab0d-d39d96702d69-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.045629 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnld\" (UniqueName: \"kubernetes.io/projected/c4d17b17-5362-40cb-ab0d-d39d96702d69-kube-api-access-sfnld\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.098089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff90bf11-d278-4cff-81cd-5418b0ed0e9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4d17b17-5362-40cb-ab0d-d39d96702d69\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.202155 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nhcst"] Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.235318 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"4ee0a72c-d057-4042-8611-c509c4ed3edf","Type":"ContainerDied","Data":"9c88622f4a366698fe8db794f3a9b5e12666021208a468d8950f4432351f7e4e"} Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.235802 4687 scope.go:117] "RemoveContainer" containerID="7b0e59304d730b2823fb550356eb276687767419fd27e5af45ae6396cd176e8e" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.235439 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.236752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" event={"ID":"402d753d-5ffe-4835-9d73-cfe434a79c71","Type":"ContainerStarted","Data":"5ebdbc258b950df8d95f668c49d3dabb556115261c65d65b9c4fb25d838ca4f7"} Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.282819 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.291972 4687 scope.go:117] "RemoveContainer" containerID="be93c8a792c8bdf44ed1a5742f64d7c5582812aaca1bff03b025d337a8592e0f" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.314901 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.332156 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.334704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.335522 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.392411 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.438766 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s59n\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-kube-api-access-5s59n\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.438823 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.438876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/457918a9-182a-4e9e-b03f-ab58128edc95-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.438912 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.438932 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc 
kubenswrapper[4687]: I0312 16:29:13.438950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-server-conf\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.438984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.439015 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/457918a9-182a-4e9e-b03f-ab58128edc95-pod-info\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.439031 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.439066 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-config-data\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.439134 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.541024 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.541077 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.541106 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-server-conf\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.541183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.541661 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.542771 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-server-conf\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.542848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/457918a9-182a-4e9e-b03f-ab58128edc95-pod-info\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.542880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.542969 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-config-data\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.543612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.543711 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s59n\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-kube-api-access-5s59n\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.543783 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.543712 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc 
kubenswrapper[4687]: I0312 16:29:13.544309 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.544409 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/457918a9-182a-4e9e-b03f-ab58128edc95-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.544687 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/457918a9-182a-4e9e-b03f-ab58128edc95-config-data\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.548977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/457918a9-182a-4e9e-b03f-ab58128edc95-pod-info\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.549145 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.549244 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.550284 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.550563 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46a24cb589ac69273944e258cc5d0edab0c26fdb6edbe87b89c93929adde0f2a/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.552465 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/457918a9-182a-4e9e-b03f-ab58128edc95-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.563121 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s59n\" (UniqueName: \"kubernetes.io/projected/457918a9-182a-4e9e-b03f-ab58128edc95-kube-api-access-5s59n\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.635975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-833bfcf2-b5f8-4ce8-ac30-b2917ed6d807\") pod \"rabbitmq-server-2\" (UID: \"457918a9-182a-4e9e-b03f-ab58128edc95\") " pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.672122 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.734046 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:29:13 crc kubenswrapper[4687]: E0312 16:29:13.734645 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.749657 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d83f3b6-81ea-457d-815e-22eab66d4058" path="/var/lib/kubelet/pods/2d83f3b6-81ea-457d-815e-22eab66d4058/volumes" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.752750 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee0a72c-d057-4042-8611-c509c4ed3edf" path="/var/lib/kubelet/pods/4ee0a72c-d057-4042-8611-c509c4ed3edf/volumes" Mar 12 16:29:13 crc kubenswrapper[4687]: I0312 16:29:13.893297 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 16:29:13 crc kubenswrapper[4687]: W0312 16:29:13.909751 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d17b17_5362_40cb_ab0d_d39d96702d69.slice/crio-16acd6c558e4f991aefbeaf1b3ce722528dcdc88d743ddeb7a12a2425c021e3c WatchSource:0}: Error finding container 16acd6c558e4f991aefbeaf1b3ce722528dcdc88d743ddeb7a12a2425c021e3c: Status 404 returned error can't find the container with id 16acd6c558e4f991aefbeaf1b3ce722528dcdc88d743ddeb7a12a2425c021e3c Mar 12 16:29:14 crc kubenswrapper[4687]: I0312 16:29:14.179015 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 12 16:29:14 crc kubenswrapper[4687]: I0312 16:29:14.335180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"457918a9-182a-4e9e-b03f-ab58128edc95","Type":"ContainerStarted","Data":"2fa08da62abd07e4b030200777caf4546f7c053d89ec3c58ad36acc371888cfb"} Mar 12 16:29:14 crc kubenswrapper[4687]: I0312 16:29:14.405015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerStarted","Data":"e3cb327c7a439f6ea0858b0d273042fb9b24289ef0835513b45224ba5c8bfa42"} Mar 12 16:29:14 crc kubenswrapper[4687]: I0312 16:29:14.430581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4d17b17-5362-40cb-ab0d-d39d96702d69","Type":"ContainerStarted","Data":"16acd6c558e4f991aefbeaf1b3ce722528dcdc88d743ddeb7a12a2425c021e3c"} Mar 12 16:29:14 crc kubenswrapper[4687]: I0312 16:29:14.466210 4687 generic.go:334] "Generic (PLEG): container finished" podID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerID="af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e" exitCode=0 Mar 12 16:29:14 crc kubenswrapper[4687]: I0312 16:29:14.466270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" 
event={"ID":"402d753d-5ffe-4835-9d73-cfe434a79c71","Type":"ContainerDied","Data":"af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e"} Mar 12 16:29:15 crc kubenswrapper[4687]: I0312 16:29:15.482219 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerStarted","Data":"ea53b9ca3f6bca1e34c53af17d1b75ec8028f5f493be6cdacd088905cacd63e5"} Mar 12 16:29:15 crc kubenswrapper[4687]: I0312 16:29:15.485746 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" event={"ID":"402d753d-5ffe-4835-9d73-cfe434a79c71","Type":"ContainerStarted","Data":"7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194"} Mar 12 16:29:15 crc kubenswrapper[4687]: I0312 16:29:15.486023 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:15 crc kubenswrapper[4687]: I0312 16:29:15.517990 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" podStartSLOduration=10.5179642 podStartE2EDuration="10.5179642s" podCreationTimestamp="2026-03-12 16:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:29:15.509793687 +0000 UTC m=+1604.473756041" watchObservedRunningTime="2026-03-12 16:29:15.5179642 +0000 UTC m=+1604.481926554" Mar 12 16:29:16 crc kubenswrapper[4687]: I0312 16:29:16.502950 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4d17b17-5362-40cb-ab0d-d39d96702d69","Type":"ContainerStarted","Data":"e22c0caa3dcca5cfcf4fe8a3b02481194f2086d37e6f4f5874bb6cd88299d9a2"} Mar 12 16:29:16 crc kubenswrapper[4687]: I0312 16:29:16.506747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"457918a9-182a-4e9e-b03f-ab58128edc95","Type":"ContainerStarted","Data":"04bab90ecf798692ca52e96f52f923bd48117b5d247574efff997be0dad6ffb5"} Mar 12 16:29:16 crc kubenswrapper[4687]: E0312 16:29:16.577729 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" Mar 12 16:29:17 crc kubenswrapper[4687]: I0312 16:29:17.520007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerStarted","Data":"fb295708edbb2f5907b5c1b3de1a374d998066ad455cfc52bb7791d984a321e3"} Mar 12 16:29:17 crc kubenswrapper[4687]: I0312 16:29:17.520892 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 16:29:17 crc kubenswrapper[4687]: E0312 16:29:17.521987 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" Mar 12 16:29:18 crc kubenswrapper[4687]: E0312 16:29:18.530656 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.168523 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.248998 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9r576"] Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.249529 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" podUID="80c26f7b-2406-431b-990b-99c716d8860a" containerName="dnsmasq-dns" containerID="cri-o://11c631cf9e6ef62846cc8e0be2de6e26a94e5dd39e461b06981db7ddb2f5158d" gracePeriod=10 Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.457508 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-lsjxb"] Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.459834 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.488457 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-lsjxb"] Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.543025 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.543068 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.543097 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.543124 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.543419 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-config\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.543555 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfsl\" (UniqueName: \"kubernetes.io/projected/1dd00953-0208-426f-a053-88364a767791-kube-api-access-zwfsl\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.543652 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.571738 4687 generic.go:334] "Generic (PLEG): container finished" podID="80c26f7b-2406-431b-990b-99c716d8860a" containerID="11c631cf9e6ef62846cc8e0be2de6e26a94e5dd39e461b06981db7ddb2f5158d" exitCode=0 Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.571787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" event={"ID":"80c26f7b-2406-431b-990b-99c716d8860a","Type":"ContainerDied","Data":"11c631cf9e6ef62846cc8e0be2de6e26a94e5dd39e461b06981db7ddb2f5158d"} Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.646270 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwfsl\" (UniqueName: \"kubernetes.io/projected/1dd00953-0208-426f-a053-88364a767791-kube-api-access-zwfsl\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.646468 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.646609 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.646644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.646667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.646700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.646810 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-config\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.648013 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.648164 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.648167 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.648306 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.649270 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-config\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.649689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd00953-0208-426f-a053-88364a767791-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.667215 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwfsl\" (UniqueName: \"kubernetes.io/projected/1dd00953-0208-426f-a053-88364a767791-kube-api-access-zwfsl\") pod \"dnsmasq-dns-6f6df4f56c-lsjxb\" (UID: \"1dd00953-0208-426f-a053-88364a767791\") " pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.782240 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:21 crc kubenswrapper[4687]: I0312 16:29:21.978933 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.064273 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-swift-storage-0\") pod \"80c26f7b-2406-431b-990b-99c716d8860a\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.064441 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-nb\") pod \"80c26f7b-2406-431b-990b-99c716d8860a\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.064479 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-config\") pod \"80c26f7b-2406-431b-990b-99c716d8860a\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.064510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-svc\") pod \"80c26f7b-2406-431b-990b-99c716d8860a\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.064555 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz85w\" (UniqueName: \"kubernetes.io/projected/80c26f7b-2406-431b-990b-99c716d8860a-kube-api-access-vz85w\") pod \"80c26f7b-2406-431b-990b-99c716d8860a\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.064650 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-sb\") pod \"80c26f7b-2406-431b-990b-99c716d8860a\" (UID: \"80c26f7b-2406-431b-990b-99c716d8860a\") " Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.119600 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c26f7b-2406-431b-990b-99c716d8860a-kube-api-access-vz85w" (OuterVolumeSpecName: "kube-api-access-vz85w") pod "80c26f7b-2406-431b-990b-99c716d8860a" (UID: "80c26f7b-2406-431b-990b-99c716d8860a"). InnerVolumeSpecName "kube-api-access-vz85w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.132245 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80c26f7b-2406-431b-990b-99c716d8860a" (UID: "80c26f7b-2406-431b-990b-99c716d8860a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.172763 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.172797 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz85w\" (UniqueName: \"kubernetes.io/projected/80c26f7b-2406-431b-990b-99c716d8860a-kube-api-access-vz85w\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.176970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-config" (OuterVolumeSpecName: "config") pod "80c26f7b-2406-431b-990b-99c716d8860a" (UID: "80c26f7b-2406-431b-990b-99c716d8860a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.199988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80c26f7b-2406-431b-990b-99c716d8860a" (UID: "80c26f7b-2406-431b-990b-99c716d8860a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.233482 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "80c26f7b-2406-431b-990b-99c716d8860a" (UID: "80c26f7b-2406-431b-990b-99c716d8860a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.237772 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80c26f7b-2406-431b-990b-99c716d8860a" (UID: "80c26f7b-2406-431b-990b-99c716d8860a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.275019 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.275083 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.275093 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.275102 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80c26f7b-2406-431b-990b-99c716d8860a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.335319 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-lsjxb"] Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.585169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" event={"ID":"1dd00953-0208-426f-a053-88364a767791","Type":"ContainerStarted","Data":"7eb3fbc9da26b59875ae0cb9ac51991f9a05e284ded18584300d47a3be3f6033"} Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.587329 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" event={"ID":"80c26f7b-2406-431b-990b-99c716d8860a","Type":"ContainerDied","Data":"b9e5b60b5fcdfda02afdd1c06cfece11c3836d8d8a940d9c285d9981b7dff953"} Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.587381 4687 scope.go:117] "RemoveContainer" containerID="11c631cf9e6ef62846cc8e0be2de6e26a94e5dd39e461b06981db7ddb2f5158d" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.587463 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-9r576" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.613655 4687 scope.go:117] "RemoveContainer" containerID="aeb9e3c1423e7caee15e6aa96229c940ed17e3395e2315c6da8ee6458acbe250" Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.626471 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9r576"] Mar 12 16:29:22 crc kubenswrapper[4687]: I0312 16:29:22.638149 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-9r576"] Mar 12 16:29:23 crc kubenswrapper[4687]: I0312 16:29:23.598903 4687 generic.go:334] "Generic (PLEG): container finished" podID="1dd00953-0208-426f-a053-88364a767791" containerID="87862610f875ea87a19344540d71fe7a247c62463b0ccdf81ab9082e31df0d11" exitCode=0 Mar 12 16:29:23 crc kubenswrapper[4687]: I0312 16:29:23.598975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" event={"ID":"1dd00953-0208-426f-a053-88364a767791","Type":"ContainerDied","Data":"87862610f875ea87a19344540d71fe7a247c62463b0ccdf81ab9082e31df0d11"} Mar 12 16:29:23 crc kubenswrapper[4687]: I0312 16:29:23.770020 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80c26f7b-2406-431b-990b-99c716d8860a" path="/var/lib/kubelet/pods/80c26f7b-2406-431b-990b-99c716d8860a/volumes" Mar 12 16:29:24 crc kubenswrapper[4687]: I0312 16:29:24.615427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" event={"ID":"1dd00953-0208-426f-a053-88364a767791","Type":"ContainerStarted","Data":"7b36167456a21e6e516329a8dae51b9dbe767f3f2d3d0f8ca7ebb49027346cd7"} Mar 12 16:29:24 crc kubenswrapper[4687]: I0312 16:29:24.615685 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:24 crc kubenswrapper[4687]: I0312 16:29:24.640929 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" podStartSLOduration=3.64091302 podStartE2EDuration="3.64091302s" podCreationTimestamp="2026-03-12 16:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:29:24.635903743 +0000 UTC m=+1613.599866097" watchObservedRunningTime="2026-03-12 16:29:24.64091302 +0000 UTC m=+1613.604875364" Mar 12 16:29:25 crc kubenswrapper[4687]: I0312 16:29:25.630155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dv82n" event={"ID":"d5f9932b-ed80-41a1-aea7-60b0466ebe7b","Type":"ContainerStarted","Data":"8d62bdd938592fd042b95a13fec5a580d6c38e9bd755ec9cf81f56b00e12b846"} Mar 12 16:29:25 crc kubenswrapper[4687]: I0312 16:29:25.657501 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-dv82n" podStartSLOduration=2.104510524 podStartE2EDuration="37.657481443s" podCreationTimestamp="2026-03-12 16:28:48 +0000 UTC" firstStartedPulling="2026-03-12 16:28:49.414953891 +0000 UTC m=+1578.378916235" lastFinishedPulling="2026-03-12 16:29:24.96792481 +0000 UTC m=+1613.931887154" observedRunningTime="2026-03-12 16:29:25.652706373 +0000 UTC m=+1614.616668727" watchObservedRunningTime="2026-03-12 16:29:25.657481443 +0000 UTC m=+1614.621443787" Mar 12 16:29:27 crc kubenswrapper[4687]: I0312 16:29:27.652988 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5f9932b-ed80-41a1-aea7-60b0466ebe7b" 
containerID="8d62bdd938592fd042b95a13fec5a580d6c38e9bd755ec9cf81f56b00e12b846" exitCode=0 Mar 12 16:29:27 crc kubenswrapper[4687]: I0312 16:29:27.653286 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dv82n" event={"ID":"d5f9932b-ed80-41a1-aea7-60b0466ebe7b","Type":"ContainerDied","Data":"8d62bdd938592fd042b95a13fec5a580d6c38e9bd755ec9cf81f56b00e12b846"} Mar 12 16:29:27 crc kubenswrapper[4687]: I0312 16:29:27.809629 4687 scope.go:117] "RemoveContainer" containerID="349a96d569511cce7e1aa8aebed25455e3777ae8ed981fc3b610167c0f18d1e7" Mar 12 16:29:28 crc kubenswrapper[4687]: I0312 16:29:28.732999 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:29:28 crc kubenswrapper[4687]: E0312 16:29:28.733613 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.179379 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dv82n" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.254654 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-config-data\") pod \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.254840 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-combined-ca-bundle\") pod \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.255033 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7fpt\" (UniqueName: \"kubernetes.io/projected/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-kube-api-access-c7fpt\") pod \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\" (UID: \"d5f9932b-ed80-41a1-aea7-60b0466ebe7b\") " Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.260679 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-kube-api-access-c7fpt" (OuterVolumeSpecName: "kube-api-access-c7fpt") pod "d5f9932b-ed80-41a1-aea7-60b0466ebe7b" (UID: "d5f9932b-ed80-41a1-aea7-60b0466ebe7b"). InnerVolumeSpecName "kube-api-access-c7fpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.296014 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5f9932b-ed80-41a1-aea7-60b0466ebe7b" (UID: "d5f9932b-ed80-41a1-aea7-60b0466ebe7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.344453 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-config-data" (OuterVolumeSpecName: "config-data") pod "d5f9932b-ed80-41a1-aea7-60b0466ebe7b" (UID: "d5f9932b-ed80-41a1-aea7-60b0466ebe7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.358157 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7fpt\" (UniqueName: \"kubernetes.io/projected/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-kube-api-access-c7fpt\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.358199 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.358212 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f9932b-ed80-41a1-aea7-60b0466ebe7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.683317 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dv82n" event={"ID":"d5f9932b-ed80-41a1-aea7-60b0466ebe7b","Type":"ContainerDied","Data":"38a22b15cb83af8f79d9df6fb4828fec82a7efec937fe617d0bc25500d989622"} Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.683399 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38a22b15cb83af8f79d9df6fb4828fec82a7efec937fe617d0bc25500d989622" Mar 12 16:29:29 crc kubenswrapper[4687]: I0312 16:29:29.683469 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dv82n" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.853726 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-b87795674-d968s"] Mar 12 16:29:30 crc kubenswrapper[4687]: E0312 16:29:30.858724 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c26f7b-2406-431b-990b-99c716d8860a" containerName="init" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.858782 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c26f7b-2406-431b-990b-99c716d8860a" containerName="init" Mar 12 16:29:30 crc kubenswrapper[4687]: E0312 16:29:30.858831 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c26f7b-2406-431b-990b-99c716d8860a" containerName="dnsmasq-dns" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.858840 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c26f7b-2406-431b-990b-99c716d8860a" containerName="dnsmasq-dns" Mar 12 16:29:30 crc kubenswrapper[4687]: E0312 16:29:30.858854 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f9932b-ed80-41a1-aea7-60b0466ebe7b" containerName="heat-db-sync" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.858862 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f9932b-ed80-41a1-aea7-60b0466ebe7b" containerName="heat-db-sync" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.859382 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="80c26f7b-2406-431b-990b-99c716d8860a" containerName="dnsmasq-dns" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.859408 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f9932b-ed80-41a1-aea7-60b0466ebe7b" containerName="heat-db-sync" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.860481 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.873298 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-b87795674-d968s"] Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.897616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzc2l\" (UniqueName: \"kubernetes.io/projected/667072f7-1d8a-4f67-87bb-f587f6384ffd-kube-api-access-zzc2l\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.897680 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-combined-ca-bundle\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.897786 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-config-data\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.897850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-config-data-custom\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.936670 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-76878d96fd-ck8qd"] Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.938795 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.948565 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6cd988bd5d-l6ddg"] Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.952354 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.973241 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76878d96fd-ck8qd"] Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.996925 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cd988bd5d-l6ddg"] Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.999729 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-combined-ca-bundle\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.999794 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-combined-ca-bundle\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.999829 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-public-tls-certs\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.999880 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-config-data\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.999901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-config-data-custom\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.999925 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-config-data\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:30 crc kubenswrapper[4687]: I0312 16:29:30.999945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-config-data-custom\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.000003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-config-data-custom\") pod \"heat-engine-b87795674-d968s\" (UID: 
\"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.000036 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-internal-tls-certs\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.000065 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-internal-tls-certs\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.000296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-config-data\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.000492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-public-tls-certs\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.000632 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztgm\" (UniqueName: \"kubernetes.io/projected/61cc010a-4cd7-4938-b38e-4af19ead4e50-kube-api-access-kztgm\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.000731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzc2l\" (UniqueName: \"kubernetes.io/projected/667072f7-1d8a-4f67-87bb-f587f6384ffd-kube-api-access-zzc2l\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.001252 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-combined-ca-bundle\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.001872 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jls\" (UniqueName: \"kubernetes.io/projected/95929173-9929-400b-be3f-2fee62cbab3d-kube-api-access-64jls\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.006591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-config-data-custom\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.006812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-combined-ca-bundle\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.013145 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/667072f7-1d8a-4f67-87bb-f587f6384ffd-config-data\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.026728 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzc2l\" (UniqueName: \"kubernetes.io/projected/667072f7-1d8a-4f67-87bb-f587f6384ffd-kube-api-access-zzc2l\") pod \"heat-engine-b87795674-d968s\" (UID: \"667072f7-1d8a-4f67-87bb-f587f6384ffd\") " pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-config-data\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104668 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-public-tls-certs\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104705 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kztgm\" (UniqueName: \"kubernetes.io/projected/61cc010a-4cd7-4938-b38e-4af19ead4e50-kube-api-access-kztgm\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104804 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64jls\" (UniqueName: \"kubernetes.io/projected/95929173-9929-400b-be3f-2fee62cbab3d-kube-api-access-64jls\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104864 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-combined-ca-bundle\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-combined-ca-bundle\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104928 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-public-tls-certs\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.104986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-config-data\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.105012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-config-data-custom\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.105048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-config-data-custom\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.105132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-internal-tls-certs\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.105171 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-internal-tls-certs\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.110131 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-combined-ca-bundle\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.111701 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-public-tls-certs\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.111915 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-combined-ca-bundle\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.112099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-public-tls-certs\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.113149 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-config-data-custom\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.121520 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-internal-tls-certs\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.121904 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-config-data\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.124226 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-internal-tls-certs\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.125079 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61cc010a-4cd7-4938-b38e-4af19ead4e50-config-data\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: \"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.125142 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95929173-9929-400b-be3f-2fee62cbab3d-config-data-custom\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.128080 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64jls\" (UniqueName: \"kubernetes.io/projected/95929173-9929-400b-be3f-2fee62cbab3d-kube-api-access-64jls\") pod \"heat-api-6cd988bd5d-l6ddg\" (UID: \"95929173-9929-400b-be3f-2fee62cbab3d\") " pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.128118 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kztgm\" (UniqueName: \"kubernetes.io/projected/61cc010a-4cd7-4938-b38e-4af19ead4e50-kube-api-access-kztgm\") pod \"heat-cfnapi-76878d96fd-ck8qd\" (UID: 
\"61cc010a-4cd7-4938-b38e-4af19ead4e50\") " pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.193708 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.253761 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.275594 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.713387 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-b87795674-d968s"] Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.792226 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-lsjxb" Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.944517 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nhcst"] Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.944828 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" podUID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerName="dnsmasq-dns" containerID="cri-o://7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194" gracePeriod=10 Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.958449 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76878d96fd-ck8qd"] Mar 12 16:29:31 crc kubenswrapper[4687]: I0312 16:29:31.979195 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cd988bd5d-l6ddg"] Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.627540 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.671843 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-svc\") pod \"402d753d-5ffe-4835-9d73-cfe434a79c71\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.671893 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-sb\") pod \"402d753d-5ffe-4835-9d73-cfe434a79c71\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.671929 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-config\") pod \"402d753d-5ffe-4835-9d73-cfe434a79c71\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.671981 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-swift-storage-0\") pod \"402d753d-5ffe-4835-9d73-cfe434a79c71\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.674346 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxp2\" (UniqueName: \"kubernetes.io/projected/402d753d-5ffe-4835-9d73-cfe434a79c71-kube-api-access-mxxp2\") pod \"402d753d-5ffe-4835-9d73-cfe434a79c71\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.675006 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-openstack-edpm-ipam\") pod \"402d753d-5ffe-4835-9d73-cfe434a79c71\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.675168 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-nb\") pod \"402d753d-5ffe-4835-9d73-cfe434a79c71\" (UID: \"402d753d-5ffe-4835-9d73-cfe434a79c71\") " Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.712381 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402d753d-5ffe-4835-9d73-cfe434a79c71-kube-api-access-mxxp2" (OuterVolumeSpecName: "kube-api-access-mxxp2") pod "402d753d-5ffe-4835-9d73-cfe434a79c71" (UID: "402d753d-5ffe-4835-9d73-cfe434a79c71"). InnerVolumeSpecName "kube-api-access-mxxp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.731443 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76878d96fd-ck8qd" event={"ID":"61cc010a-4cd7-4938-b38e-4af19ead4e50","Type":"ContainerStarted","Data":"54309f6d8d925d78704a687a461ac7f453ddb2506ca5be8a3b7581885a9003bd"} Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.735717 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-b87795674-d968s" event={"ID":"667072f7-1d8a-4f67-87bb-f587f6384ffd","Type":"ContainerStarted","Data":"0cb2364410b3ab385dc6bf703cf4d29d2158cf2fe4cdb13e5890e976d50478b2"} Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.735761 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-b87795674-d968s" event={"ID":"667072f7-1d8a-4f67-87bb-f587f6384ffd","Type":"ContainerStarted","Data":"26cf433eb8b1598ae34fa095317bfb320f9b917119c924273a153c8509981a77"} Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.735959 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.746784 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cd988bd5d-l6ddg" event={"ID":"95929173-9929-400b-be3f-2fee62cbab3d","Type":"ContainerStarted","Data":"abdf17a8e0aef913baccbb1cde15b5eeb67d6233b2be590e7f03549028da4175"} Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.763649 4687 generic.go:334] "Generic (PLEG): container finished" podID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerID="7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194" exitCode=0 Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.763700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" event={"ID":"402d753d-5ffe-4835-9d73-cfe434a79c71","Type":"ContainerDied","Data":"7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194"} Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.763729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" event={"ID":"402d753d-5ffe-4835-9d73-cfe434a79c71","Type":"ContainerDied","Data":"5ebdbc258b950df8d95f668c49d3dabb556115261c65d65b9c4fb25d838ca4f7"} Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.763749 4687 scope.go:117] "RemoveContainer" containerID="7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.763905 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-nhcst" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.774329 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "402d753d-5ffe-4835-9d73-cfe434a79c71" (UID: "402d753d-5ffe-4835-9d73-cfe434a79c71"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.780109 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.780143 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxp2\" (UniqueName: \"kubernetes.io/projected/402d753d-5ffe-4835-9d73-cfe434a79c71-kube-api-access-mxxp2\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.785897 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "402d753d-5ffe-4835-9d73-cfe434a79c71" (UID: "402d753d-5ffe-4835-9d73-cfe434a79c71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.795485 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "402d753d-5ffe-4835-9d73-cfe434a79c71" (UID: "402d753d-5ffe-4835-9d73-cfe434a79c71"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.798072 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.876895 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "402d753d-5ffe-4835-9d73-cfe434a79c71" (UID: "402d753d-5ffe-4835-9d73-cfe434a79c71"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.892603 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.892637 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.892647 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.893846 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-b87795674-d968s" podStartSLOduration=2.893817522 podStartE2EDuration="2.893817522s" podCreationTimestamp="2026-03-12 16:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:29:32.757095736 +0000 UTC m=+1621.721058070" watchObservedRunningTime="2026-03-12 16:29:32.893817522 +0000 UTC m=+1621.857779866" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.976537 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "402d753d-5ffe-4835-9d73-cfe434a79c71" (UID: "402d753d-5ffe-4835-9d73-cfe434a79c71"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.985619 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-config" (OuterVolumeSpecName: "config") pod "402d753d-5ffe-4835-9d73-cfe434a79c71" (UID: "402d753d-5ffe-4835-9d73-cfe434a79c71"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.995124 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:32 crc kubenswrapper[4687]: I0312 16:29:32.995155 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d753d-5ffe-4835-9d73-cfe434a79c71-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:33 crc kubenswrapper[4687]: I0312 16:29:33.147069 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nhcst"] Mar 12 16:29:33 crc kubenswrapper[4687]: I0312 16:29:33.161039 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-nhcst"] Mar 12 16:29:33 crc kubenswrapper[4687]: I0312 16:29:33.589557 4687 scope.go:117] "RemoveContainer" containerID="af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e" Mar 12 16:29:33 crc kubenswrapper[4687]: I0312 16:29:33.745988 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402d753d-5ffe-4835-9d73-cfe434a79c71" path="/var/lib/kubelet/pods/402d753d-5ffe-4835-9d73-cfe434a79c71/volumes" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.051055 4687 scope.go:117] "RemoveContainer" containerID="7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194" Mar 12 16:29:34 crc kubenswrapper[4687]: E0312 16:29:34.051654 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194\": container with ID starting with 7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194 not found: ID does not exist" containerID="7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.051694 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194"} err="failed to get container status \"7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194\": rpc error: code = NotFound desc = could not find container \"7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194\": container with ID starting with 7308c2db9cd65615175e46fd03bc46e2ee71f8fb014df7a7a0d8c99947df0194 not found: ID does not exist" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.051721 4687 scope.go:117] "RemoveContainer" containerID="af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e" Mar 12 16:29:34 crc kubenswrapper[4687]: E0312 16:29:34.052130 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e\": container with ID starting with af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e not found: ID does not exist" containerID="af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.052173 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e"} err="failed to get container status 
\"af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e\": rpc error: code = NotFound desc = could not find container \"af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e\": container with ID starting with af95d0b84025e72cb6d96dd0de3c6688e09e50e93b8fef6cb471d305f58e476e not found: ID does not exist" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.799538 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerStarted","Data":"2aa27c071ee14d55ab24cea39beeae927da8dff3e369144a2731dff4c1cfbb59"} Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.806298 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cd988bd5d-l6ddg" event={"ID":"95929173-9929-400b-be3f-2fee62cbab3d","Type":"ContainerStarted","Data":"c75ceb8940412fe6d9958b6841b17f1866b9648f7c7edf457efd0169dfae74ec"} Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.806527 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.823718 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76878d96fd-ck8qd" event={"ID":"61cc010a-4cd7-4938-b38e-4af19ead4e50","Type":"ContainerStarted","Data":"c5a02868a28c2ec252ad69314e1895e02c9a7f073f296c31815afcac05fba5c4"} Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.824843 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.825221 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.62854805 podStartE2EDuration="42.825211041s" podCreationTimestamp="2026-03-12 16:28:52 +0000 UTC" firstStartedPulling="2026-03-12 16:28:53.899997228 +0000 UTC m=+1582.863959572" lastFinishedPulling="2026-03-12 16:29:34.096660219 +0000 UTC m=+1623.060622563" observedRunningTime="2026-03-12 16:29:34.823223546 +0000 UTC m=+1623.787185890" watchObservedRunningTime="2026-03-12 16:29:34.825211041 +0000 UTC m=+1623.789173385" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.870310 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6cd988bd5d-l6ddg" podStartSLOduration=2.749058706 podStartE2EDuration="4.870289856s" podCreationTimestamp="2026-03-12 16:29:30 +0000 UTC" firstStartedPulling="2026-03-12 16:29:31.940230614 +0000 UTC m=+1620.904192958" lastFinishedPulling="2026-03-12 16:29:34.061461764 +0000 UTC m=+1623.025424108" observedRunningTime="2026-03-12 16:29:34.842820713 +0000 UTC m=+1623.806783057" watchObservedRunningTime="2026-03-12 16:29:34.870289856 +0000 UTC m=+1623.834252200" Mar 12 16:29:34 crc kubenswrapper[4687]: I0312 16:29:34.890382 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-76878d96fd-ck8qd" podStartSLOduration=2.723257759 podStartE2EDuration="4.890348035s" podCreationTimestamp="2026-03-12 16:29:30 +0000 UTC" firstStartedPulling="2026-03-12 16:29:31.890483882 +0000 UTC m=+1620.854446226" lastFinishedPulling="2026-03-12 16:29:34.057574158 +0000 UTC m=+1623.021536502" observedRunningTime="2026-03-12 16:29:34.88540654 +0000 UTC m=+1623.849368904" watchObservedRunningTime="2026-03-12 16:29:34.890348035 +0000 UTC m=+1623.854310379" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.251884 4687 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm"] Mar 12 16:29:41 crc kubenswrapper[4687]: E0312 16:29:41.252910 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerName="init" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.252927 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerName="init" Mar 12 16:29:41 crc kubenswrapper[4687]: E0312 16:29:41.252952 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerName="dnsmasq-dns" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.252960 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerName="dnsmasq-dns" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.253259 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="402d753d-5ffe-4835-9d73-cfe434a79c71" containerName="dnsmasq-dns" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.254567 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.260838 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.261167 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.261423 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.261570 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.269974 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm"] Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.394493 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.394555 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmq9n\" (UniqueName: \"kubernetes.io/projected/e4c32507-463c-4dbc-887f-41d1713ac4c6-kube-api-access-hmq9n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.394612 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.394723 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.496771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.496818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmq9n\" (UniqueName: \"kubernetes.io/projected/e4c32507-463c-4dbc-887f-41d1713ac4c6-kube-api-access-hmq9n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.496862 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.496965 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.502995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.503736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.505820 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" 
(UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.529072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmq9n\" (UniqueName: \"kubernetes.io/projected/e4c32507-463c-4dbc-887f-41d1713ac4c6-kube-api-access-hmq9n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:41 crc kubenswrapper[4687]: I0312 16:29:41.590928 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:29:42 crc kubenswrapper[4687]: I0312 16:29:42.701731 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm"] Mar 12 16:29:42 crc kubenswrapper[4687]: W0312 16:29:42.705953 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4c32507_463c_4dbc_887f_41d1713ac4c6.slice/crio-8fa08ba344d75bcae945890f5e8657dfe33af50e22a98b0d40a030fbde78838c WatchSource:0}: Error finding container 8fa08ba344d75bcae945890f5e8657dfe33af50e22a98b0d40a030fbde78838c: Status 404 returned error can't find the container with id 8fa08ba344d75bcae945890f5e8657dfe33af50e22a98b0d40a030fbde78838c Mar 12 16:29:42 crc kubenswrapper[4687]: I0312 16:29:42.947055 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6cd988bd5d-l6ddg" Mar 12 16:29:42 crc kubenswrapper[4687]: I0312 16:29:42.980618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" event={"ID":"e4c32507-463c-4dbc-887f-41d1713ac4c6","Type":"ContainerStarted","Data":"8fa08ba344d75bcae945890f5e8657dfe33af50e22a98b0d40a030fbde78838c"} Mar 12 16:29:43 crc kubenswrapper[4687]: I0312 16:29:43.092532 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67fdc58659-zfnqd"] Mar 12 16:29:43 crc kubenswrapper[4687]: I0312 16:29:43.092793 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-67fdc58659-zfnqd" podUID="c8a3074c-212f-414d-b52b-69c5f0a9071b" containerName="heat-api" containerID="cri-o://37094414c653a39bc0bf7364b05192aec41fafa63815641831bae639a4daf91a" gracePeriod=60 Mar 12 16:29:43 crc kubenswrapper[4687]: I0312 16:29:43.734122 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:29:43 crc kubenswrapper[4687]: E0312 16:29:43.735061 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:29:43 crc kubenswrapper[4687]: I0312 16:29:43.988199 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-76878d96fd-ck8qd" Mar 12 16:29:44 crc kubenswrapper[4687]: I0312 16:29:44.074447 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f74959f89-ptk6f"] Mar 
12 16:29:44 crc kubenswrapper[4687]: I0312 16:29:44.076041 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" podUID="f99b8239-d236-4756-ab84-03b17e79ce5b" containerName="heat-cfnapi" containerID="cri-o://920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb" gracePeriod=60 Mar 12 16:29:46 crc kubenswrapper[4687]: I0312 16:29:46.289410 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-67fdc58659-zfnqd" podUID="c8a3074c-212f-414d-b52b-69c5f0a9071b" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.225:8004/healthcheck\": dial tcp 10.217.0.225:8004: connect: connection refused" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.030963 4687 generic.go:334] "Generic (PLEG): container finished" podID="c8a3074c-212f-414d-b52b-69c5f0a9071b" containerID="37094414c653a39bc0bf7364b05192aec41fafa63815641831bae639a4daf91a" exitCode=0 Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.031017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67fdc58659-zfnqd" event={"ID":"c8a3074c-212f-414d-b52b-69c5f0a9071b","Type":"ContainerDied","Data":"37094414c653a39bc0bf7364b05192aec41fafa63815641831bae639a4daf91a"} Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.208100 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.254450 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" podUID="f99b8239-d236-4756-ab84-03b17e79ce5b" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.226:8000/healthcheck\": read tcp 10.217.0.2:40886->10.217.0.226:8000: read: connection reset by peer" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.394859 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-internal-tls-certs\") pod \"c8a3074c-212f-414d-b52b-69c5f0a9071b\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.395810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-public-tls-certs\") pod \"c8a3074c-212f-414d-b52b-69c5f0a9071b\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.395965 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-combined-ca-bundle\") pod \"c8a3074c-212f-414d-b52b-69c5f0a9071b\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.396085 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data-custom\") pod \"c8a3074c-212f-414d-b52b-69c5f0a9071b\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.396133 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcvpc\" (UniqueName: 
\"kubernetes.io/projected/c8a3074c-212f-414d-b52b-69c5f0a9071b-kube-api-access-qcvpc\") pod \"c8a3074c-212f-414d-b52b-69c5f0a9071b\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.396158 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data\") pod \"c8a3074c-212f-414d-b52b-69c5f0a9071b\" (UID: \"c8a3074c-212f-414d-b52b-69c5f0a9071b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.404549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a3074c-212f-414d-b52b-69c5f0a9071b-kube-api-access-qcvpc" (OuterVolumeSpecName: "kube-api-access-qcvpc") pod "c8a3074c-212f-414d-b52b-69c5f0a9071b" (UID: "c8a3074c-212f-414d-b52b-69c5f0a9071b"). InnerVolumeSpecName "kube-api-access-qcvpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.419617 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8a3074c-212f-414d-b52b-69c5f0a9071b" (UID: "c8a3074c-212f-414d-b52b-69c5f0a9071b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.452526 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8a3074c-212f-414d-b52b-69c5f0a9071b" (UID: "c8a3074c-212f-414d-b52b-69c5f0a9071b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.480636 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c8a3074c-212f-414d-b52b-69c5f0a9071b" (UID: "c8a3074c-212f-414d-b52b-69c5f0a9071b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.485878 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8a3074c-212f-414d-b52b-69c5f0a9071b" (UID: "c8a3074c-212f-414d-b52b-69c5f0a9071b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.502390 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.502423 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.502435 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcvpc\" (UniqueName: \"kubernetes.io/projected/c8a3074c-212f-414d-b52b-69c5f0a9071b-kube-api-access-qcvpc\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.502445 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.502453 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.509391 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data" (OuterVolumeSpecName: "config-data") pod "c8a3074c-212f-414d-b52b-69c5f0a9071b" (UID: "c8a3074c-212f-414d-b52b-69c5f0a9071b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.604373 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a3074c-212f-414d-b52b-69c5f0a9071b-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.748991 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.911299 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-internal-tls-certs\") pod \"f99b8239-d236-4756-ab84-03b17e79ce5b\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.911399 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm866\" (UniqueName: \"kubernetes.io/projected/f99b8239-d236-4756-ab84-03b17e79ce5b-kube-api-access-zm866\") pod \"f99b8239-d236-4756-ab84-03b17e79ce5b\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.911465 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data\") pod \"f99b8239-d236-4756-ab84-03b17e79ce5b\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.911482 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data-custom\") pod \"f99b8239-d236-4756-ab84-03b17e79ce5b\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.911545 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-combined-ca-bundle\") pod \"f99b8239-d236-4756-ab84-03b17e79ce5b\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.911646 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-public-tls-certs\") pod \"f99b8239-d236-4756-ab84-03b17e79ce5b\" (UID: \"f99b8239-d236-4756-ab84-03b17e79ce5b\") " Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.916016 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f99b8239-d236-4756-ab84-03b17e79ce5b" (UID: "f99b8239-d236-4756-ab84-03b17e79ce5b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.917920 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99b8239-d236-4756-ab84-03b17e79ce5b-kube-api-access-zm866" (OuterVolumeSpecName: "kube-api-access-zm866") pod "f99b8239-d236-4756-ab84-03b17e79ce5b" (UID: "f99b8239-d236-4756-ab84-03b17e79ce5b"). InnerVolumeSpecName "kube-api-access-zm866". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.950782 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f99b8239-d236-4756-ab84-03b17e79ce5b" (UID: "f99b8239-d236-4756-ab84-03b17e79ce5b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.986913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f99b8239-d236-4756-ab84-03b17e79ce5b" (UID: "f99b8239-d236-4756-ab84-03b17e79ce5b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.987593 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f99b8239-d236-4756-ab84-03b17e79ce5b" (UID: "f99b8239-d236-4756-ab84-03b17e79ce5b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:47 crc kubenswrapper[4687]: I0312 16:29:47.987621 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data" (OuterVolumeSpecName: "config-data") pod "f99b8239-d236-4756-ab84-03b17e79ce5b" (UID: "f99b8239-d236-4756-ab84-03b17e79ce5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.019538 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm866\" (UniqueName: \"kubernetes.io/projected/f99b8239-d236-4756-ab84-03b17e79ce5b-kube-api-access-zm866\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.019580 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.019595 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.019606 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.019613 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.019622 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f99b8239-d236-4756-ab84-03b17e79ce5b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.062317 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-67fdc58659-zfnqd" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.063284 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67fdc58659-zfnqd" event={"ID":"c8a3074c-212f-414d-b52b-69c5f0a9071b","Type":"ContainerDied","Data":"9ba3024609fa09a241830dea05903c13664f8a61264d99c75611fd0d72e370f9"} Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.063351 4687 scope.go:117] "RemoveContainer" containerID="37094414c653a39bc0bf7364b05192aec41fafa63815641831bae639a4daf91a" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.068313 4687 generic.go:334] "Generic (PLEG): container finished" podID="c4d17b17-5362-40cb-ab0d-d39d96702d69" containerID="e22c0caa3dcca5cfcf4fe8a3b02481194f2086d37e6f4f5874bb6cd88299d9a2" exitCode=0 Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.068424 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4d17b17-5362-40cb-ab0d-d39d96702d69","Type":"ContainerDied","Data":"e22c0caa3dcca5cfcf4fe8a3b02481194f2086d37e6f4f5874bb6cd88299d9a2"} Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.072517 4687 generic.go:334] "Generic (PLEG): container finished" podID="f99b8239-d236-4756-ab84-03b17e79ce5b" containerID="920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb" exitCode=0 Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.072566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" event={"ID":"f99b8239-d236-4756-ab84-03b17e79ce5b","Type":"ContainerDied","Data":"920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb"} Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.072625 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" event={"ID":"f99b8239-d236-4756-ab84-03b17e79ce5b","Type":"ContainerDied","Data":"9dc29de531d68242bace6181cad0742388b450bbf9ad0e85191a549b128ecb0e"} Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.072717 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7f74959f89-ptk6f" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.198083 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f74959f89-ptk6f"] Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.228730 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7f74959f89-ptk6f"] Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.247547 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67fdc58659-zfnqd"] Mar 12 16:29:48 crc kubenswrapper[4687]: E0312 16:29:48.263807 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d17b17_5362_40cb_ab0d_d39d96702d69.slice/crio-e22c0caa3dcca5cfcf4fe8a3b02481194f2086d37e6f4f5874bb6cd88299d9a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf99b8239_d236_4756_ab84_03b17e79ce5b.slice/crio-9dc29de531d68242bace6181cad0742388b450bbf9ad0e85191a549b128ecb0e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf99b8239_d236_4756_ab84_03b17e79ce5b.slice\": RecentStats: unable to find data in memory cache]" Mar 12 16:29:48 crc kubenswrapper[4687]: E0312 16:29:48.264473 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf99b8239_d236_4756_ab84_03b17e79ce5b.slice/crio-9dc29de531d68242bace6181cad0742388b450bbf9ad0e85191a549b128ecb0e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf99b8239_d236_4756_ab84_03b17e79ce5b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d17b17_5362_40cb_ab0d_d39d96702d69.slice/crio-e22c0caa3dcca5cfcf4fe8a3b02481194f2086d37e6f4f5874bb6cd88299d9a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d17b17_5362_40cb_ab0d_d39d96702d69.slice/crio-conmon-e22c0caa3dcca5cfcf4fe8a3b02481194f2086d37e6f4f5874bb6cd88299d9a2.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:29:48 crc kubenswrapper[4687]: I0312 16:29:48.268453 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-67fdc58659-zfnqd"] Mar 12 16:29:49 crc kubenswrapper[4687]: I0312 16:29:49.091693 4687 generic.go:334] "Generic (PLEG): container finished" podID="457918a9-182a-4e9e-b03f-ab58128edc95" containerID="04bab90ecf798692ca52e96f52f923bd48117b5d247574efff997be0dad6ffb5" exitCode=0 Mar 12 16:29:49 crc kubenswrapper[4687]: I0312 16:29:49.091737 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"457918a9-182a-4e9e-b03f-ab58128edc95","Type":"ContainerDied","Data":"04bab90ecf798692ca52e96f52f923bd48117b5d247574efff997be0dad6ffb5"} Mar 12 16:29:49 crc kubenswrapper[4687]: I0312 16:29:49.746625 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a3074c-212f-414d-b52b-69c5f0a9071b" path="/var/lib/kubelet/pods/c8a3074c-212f-414d-b52b-69c5f0a9071b/volumes" Mar 12 16:29:49 crc kubenswrapper[4687]: I0312 16:29:49.747586 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f99b8239-d236-4756-ab84-03b17e79ce5b" path="/var/lib/kubelet/pods/f99b8239-d236-4756-ab84-03b17e79ce5b/volumes" Mar 12 16:29:51 crc kubenswrapper[4687]: I0312 16:29:51.236671 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-b87795674-d968s" Mar 12 16:29:51 crc kubenswrapper[4687]: I0312 16:29:51.289980 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-765f6bdcdf-s57mt"] Mar 12 16:29:51 crc kubenswrapper[4687]: I0312 16:29:51.290184 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-765f6bdcdf-s57mt" podUID="481ee492-9331-4038-a70b-2fc4eddfb60f" containerName="heat-engine" containerID="cri-o://7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" gracePeriod=60 Mar 12 16:29:54 crc kubenswrapper[4687]: E0312 16:29:54.770172 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:29:54 crc kubenswrapper[4687]: E0312 16:29:54.771841 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:29:54 crc kubenswrapper[4687]: E0312 16:29:54.773126 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:29:54 crc kubenswrapper[4687]: E0312 16:29:54.773167 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-765f6bdcdf-s57mt" podUID="481ee492-9331-4038-a70b-2fc4eddfb60f" containerName="heat-engine" Mar 12 16:29:55 crc kubenswrapper[4687]: I0312 16:29:55.733040 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:29:55 crc kubenswrapper[4687]: E0312 16:29:55.733637 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:29:56 crc kubenswrapper[4687]: I0312 16:29:56.033470 4687 scope.go:117] "RemoveContainer" containerID="920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb" Mar 12 16:29:56 crc kubenswrapper[4687]: I0312 16:29:56.340507 4687 scope.go:117] "RemoveContainer" containerID="920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb" Mar 12 16:29:56 crc kubenswrapper[4687]: E0312 16:29:56.343704 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb\": container with ID starting with 920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb not found: ID does not exist" containerID="920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb" Mar 12 16:29:56 crc kubenswrapper[4687]: I0312 16:29:56.343745 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb"} err="failed to get container status \"920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb\": rpc error: code = NotFound desc = could not find container \"920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb\": container with ID starting with 920520e4c41435d460094540a35b160dfa4ddd977baaaf366efe6f40d52ebeeb not found: ID does not exist" Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.209094 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"457918a9-182a-4e9e-b03f-ab58128edc95","Type":"ContainerStarted","Data":"19182d635134916052d59fd8cd69fb913fefb6a53cef54d953a8a656ced03afb"} Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.210012 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.212017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" event={"ID":"e4c32507-463c-4dbc-887f-41d1713ac4c6","Type":"ContainerStarted","Data":"3e2126bebf33f8673629285b4046bd65028357627205ac6cfbcc3b5fabfb47d0"} Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.213600 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4d17b17-5362-40cb-ab0d-d39d96702d69","Type":"ContainerStarted","Data":"73fb9b9207fcc02cad07fbccbe6db3be6176398d5313f7760aca7025fc676ff4"} Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.214604 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.256332 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=44.256313272 podStartE2EDuration="44.256313272s" podCreationTimestamp="2026-03-12 16:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:29:57.253214207 +0000 UTC m=+1646.217176551" watchObservedRunningTime="2026-03-12 16:29:57.256313272 +0000 UTC m=+1646.220275616" Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.282903 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" podStartSLOduration=2.8729451089999998 podStartE2EDuration="16.282210962s" podCreationTimestamp="2026-03-12 16:29:41 +0000 UTC" firstStartedPulling="2026-03-12 16:29:42.708870935 +0000 UTC m=+1631.672833289" lastFinishedPulling="2026-03-12 16:29:56.118136798 +0000 UTC m=+1645.082099142" observedRunningTime="2026-03-12 16:29:57.277139763 +0000 UTC m=+1646.241102107" watchObservedRunningTime="2026-03-12 16:29:57.282210962 +0000 UTC m=+1646.246173306" Mar 12 16:29:57 crc kubenswrapper[4687]: I0312 16:29:57.324798 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=45.324784218 podStartE2EDuration="45.324784218s" podCreationTimestamp="2026-03-12 16:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:29:57.322672771 +0000 UTC m=+1646.286635115" watchObservedRunningTime="2026-03-12 16:29:57.324784218 +0000 UTC m=+1646.288746562" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.828810 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-wblmk"] Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.843872 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-wblmk"] Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.908610 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-49t6h"] Mar 12 16:29:58 crc kubenswrapper[4687]: E0312 16:29:58.909247 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a3074c-212f-414d-b52b-69c5f0a9071b" containerName="heat-api" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.909273 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a3074c-212f-414d-b52b-69c5f0a9071b" containerName="heat-api" Mar 12 16:29:58 crc kubenswrapper[4687]: E0312 16:29:58.909307 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99b8239-d236-4756-ab84-03b17e79ce5b" containerName="heat-cfnapi" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.909317 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99b8239-d236-4756-ab84-03b17e79ce5b" containerName="heat-cfnapi" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.909620 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99b8239-d236-4756-ab84-03b17e79ce5b" containerName="heat-cfnapi" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.909655 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a3074c-212f-414d-b52b-69c5f0a9071b" containerName="heat-api" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.910811 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.915583 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 16:29:58 crc kubenswrapper[4687]: I0312 16:29:58.918850 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-49t6h"] Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.019543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-config-data\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.019595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5l4\" (UniqueName: \"kubernetes.io/projected/e192db3c-31bd-4855-bebd-d30764c224cf-kube-api-access-8f5l4\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.019743 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-combined-ca-bundle\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.020531 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-scripts\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.122412 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-scripts\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.122487 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-config-data\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.122511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5l4\" (UniqueName: \"kubernetes.io/projected/e192db3c-31bd-4855-bebd-d30764c224cf-kube-api-access-8f5l4\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.122545 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-combined-ca-bundle\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.132285 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-config-data\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.139184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-combined-ca-bundle\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.156536 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-scripts\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.174934 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5l4\" (UniqueName: \"kubernetes.io/projected/e192db3c-31bd-4855-bebd-d30764c224cf-kube-api-access-8f5l4\") pod \"aodh-db-sync-49t6h\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.246958 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-49t6h" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.745878 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9201a49a-2a15-400d-ab4f-20f55725f719" path="/var/lib/kubelet/pods/9201a49a-2a15-400d-ab4f-20f55725f719/volumes" Mar 12 16:29:59 crc kubenswrapper[4687]: I0312 16:29:59.816451 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-49t6h"] Mar 12 16:29:59 crc kubenswrapper[4687]: W0312 16:29:59.825807 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode192db3c_31bd_4855_bebd_d30764c224cf.slice/crio-6e60abd938e1100b17f7aaa2cd39747b2512d28badf53e879fecd55cb82d6e8c WatchSource:0}: Error finding container 6e60abd938e1100b17f7aaa2cd39747b2512d28badf53e879fecd55cb82d6e8c: Status 404 returned error can't find the container with id 6e60abd938e1100b17f7aaa2cd39747b2512d28badf53e879fecd55cb82d6e8c Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.143193 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555550-hbp7f"] Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.146647 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.158554 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.158777 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.158907 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.160971 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555550-hbp7f"] Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.252581 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm"] Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.254477 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.256345 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm4ch\" (UniqueName: \"kubernetes.io/projected/0b9a447d-54f6-42cc-bd25-42894645e1cf-kube-api-access-zm4ch\") pod \"auto-csr-approver-29555550-hbp7f\" (UID: \"0b9a447d-54f6-42cc-bd25-42894645e1cf\") " pod="openshift-infra/auto-csr-approver-29555550-hbp7f" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.264758 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.264950 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.269037 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm"] Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.279743 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-49t6h" event={"ID":"e192db3c-31bd-4855-bebd-d30764c224cf","Type":"ContainerStarted","Data":"6e60abd938e1100b17f7aaa2cd39747b2512d28badf53e879fecd55cb82d6e8c"} Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.358605 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm4ch\" (UniqueName: \"kubernetes.io/projected/0b9a447d-54f6-42cc-bd25-42894645e1cf-kube-api-access-zm4ch\") pod \"auto-csr-approver-29555550-hbp7f\" (UID: \"0b9a447d-54f6-42cc-bd25-42894645e1cf\") " pod="openshift-infra/auto-csr-approver-29555550-hbp7f" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.358693 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60ca01e6-2aca-4d80-939e-3680dbf9f60f-config-volume\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.358871 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/60ca01e6-2aca-4d80-939e-3680dbf9f60f-secret-volume\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.358976 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6qjx\" (UniqueName: \"kubernetes.io/projected/60ca01e6-2aca-4d80-939e-3680dbf9f60f-kube-api-access-m6qjx\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.379837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm4ch\" (UniqueName: \"kubernetes.io/projected/0b9a447d-54f6-42cc-bd25-42894645e1cf-kube-api-access-zm4ch\") pod \"auto-csr-approver-29555550-hbp7f\" (UID: \"0b9a447d-54f6-42cc-bd25-42894645e1cf\") " pod="openshift-infra/auto-csr-approver-29555550-hbp7f" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.462990 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60ca01e6-2aca-4d80-939e-3680dbf9f60f-secret-volume\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.463280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6qjx\" (UniqueName: \"kubernetes.io/projected/60ca01e6-2aca-4d80-939e-3680dbf9f60f-kube-api-access-m6qjx\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.463549 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60ca01e6-2aca-4d80-939e-3680dbf9f60f-config-volume\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.464307 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60ca01e6-2aca-4d80-939e-3680dbf9f60f-config-volume\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.467572 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60ca01e6-2aca-4d80-939e-3680dbf9f60f-secret-volume\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.486519 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.486672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6qjx\" (UniqueName: \"kubernetes.io/projected/60ca01e6-2aca-4d80-939e-3680dbf9f60f-kube-api-access-m6qjx\") pod \"collect-profiles-29555550-9vgbm\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:00 crc kubenswrapper[4687]: I0312 16:30:00.578205 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:01 crc kubenswrapper[4687]: W0312 16:30:01.125601 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b9a447d_54f6_42cc_bd25_42894645e1cf.slice/crio-f5813823418157f010c2aef4a9bb442a6438a6c8bc8ef2bf7bff3ad95c24ba52 WatchSource:0}: Error finding container f5813823418157f010c2aef4a9bb442a6438a6c8bc8ef2bf7bff3ad95c24ba52: Status 404 returned error can't find the container with id f5813823418157f010c2aef4a9bb442a6438a6c8bc8ef2bf7bff3ad95c24ba52 Mar 12 16:30:01 crc kubenswrapper[4687]: I0312 16:30:01.126490 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555550-hbp7f"] Mar 12 16:30:01 crc kubenswrapper[4687]: I0312 16:30:01.226756 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm"] Mar 12 16:30:01 crc kubenswrapper[4687]: I0312 16:30:01.311171 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" event={"ID":"60ca01e6-2aca-4d80-939e-3680dbf9f60f","Type":"ContainerStarted","Data":"b3f6eeb50aa1d706d80e4c602f759f09e3030a201be3f4c6bdb76279ed1a059c"} Mar 12 16:30:01 crc kubenswrapper[4687]: I0312 16:30:01.313070 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" event={"ID":"0b9a447d-54f6-42cc-bd25-42894645e1cf","Type":"ContainerStarted","Data":"f5813823418157f010c2aef4a9bb442a6438a6c8bc8ef2bf7bff3ad95c24ba52"} Mar 12 16:30:02 crc kubenswrapper[4687]: I0312 16:30:02.329462 4687 generic.go:334] "Generic (PLEG): container finished" podID="60ca01e6-2aca-4d80-939e-3680dbf9f60f" containerID="c09855d7a8b13b9a11777b46341dbb2f58db9f6db7e8516c8acc994c8927a86e" exitCode=0 Mar 12 16:30:02 crc kubenswrapper[4687]: I0312 16:30:02.329764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" event={"ID":"60ca01e6-2aca-4d80-939e-3680dbf9f60f","Type":"ContainerDied","Data":"c09855d7a8b13b9a11777b46341dbb2f58db9f6db7e8516c8acc994c8927a86e"} Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.368160 4687 generic.go:334] "Generic (PLEG): container finished" podID="481ee492-9331-4038-a70b-2fc4eddfb60f" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" exitCode=0 Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.368675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-765f6bdcdf-s57mt" event={"ID":"481ee492-9331-4038-a70b-2fc4eddfb60f","Type":"ContainerDied","Data":"7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0"} Mar 12 16:30:04 crc kubenswrapper[4687]: E0312 16:30:04.768436 4687 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0 is running failed: container process not found" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:30:04 crc kubenswrapper[4687]: E0312 16:30:04.768908 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0 is running failed: container process not found" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:30:04 crc kubenswrapper[4687]: E0312 16:30:04.769182 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0 is running failed: container process not found" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 12 16:30:04 crc kubenswrapper[4687]: E0312 16:30:04.769263 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-765f6bdcdf-s57mt" podUID="481ee492-9331-4038-a70b-2fc4eddfb60f" containerName="heat-engine" Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.793243 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.893952 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60ca01e6-2aca-4d80-939e-3680dbf9f60f-secret-volume\") pod \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.894058 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60ca01e6-2aca-4d80-939e-3680dbf9f60f-config-volume\") pod \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.894439 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6qjx\" (UniqueName: \"kubernetes.io/projected/60ca01e6-2aca-4d80-939e-3680dbf9f60f-kube-api-access-m6qjx\") pod \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\" (UID: \"60ca01e6-2aca-4d80-939e-3680dbf9f60f\") " Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.894873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ca01e6-2aca-4d80-939e-3680dbf9f60f-config-volume" (OuterVolumeSpecName: "config-volume") pod "60ca01e6-2aca-4d80-939e-3680dbf9f60f" (UID: "60ca01e6-2aca-4d80-939e-3680dbf9f60f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.895286 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60ca01e6-2aca-4d80-939e-3680dbf9f60f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.900011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ca01e6-2aca-4d80-939e-3680dbf9f60f-kube-api-access-m6qjx" (OuterVolumeSpecName: "kube-api-access-m6qjx") pod "60ca01e6-2aca-4d80-939e-3680dbf9f60f" (UID: "60ca01e6-2aca-4d80-939e-3680dbf9f60f"). InnerVolumeSpecName "kube-api-access-m6qjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.915563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ca01e6-2aca-4d80-939e-3680dbf9f60f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60ca01e6-2aca-4d80-939e-3680dbf9f60f" (UID: "60ca01e6-2aca-4d80-939e-3680dbf9f60f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.998282 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6qjx\" (UniqueName: \"kubernetes.io/projected/60ca01e6-2aca-4d80-939e-3680dbf9f60f-kube-api-access-m6qjx\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:04 crc kubenswrapper[4687]: I0312 16:30:04.998695 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60ca01e6-2aca-4d80-939e-3680dbf9f60f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:05 crc kubenswrapper[4687]: I0312 16:30:05.390700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" event={"ID":"60ca01e6-2aca-4d80-939e-3680dbf9f60f","Type":"ContainerDied","Data":"b3f6eeb50aa1d706d80e4c602f759f09e3030a201be3f4c6bdb76279ed1a059c"} Mar 12 16:30:05 crc kubenswrapper[4687]: I0312 16:30:05.390745 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f6eeb50aa1d706d80e4c602f759f09e3030a201be3f4c6bdb76279ed1a059c" Mar 12 16:30:05 crc kubenswrapper[4687]: I0312 16:30:05.390810 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm" Mar 12 16:30:05 crc kubenswrapper[4687]: I0312 16:30:05.867840 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.022918 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6p8l\" (UniqueName: \"kubernetes.io/projected/481ee492-9331-4038-a70b-2fc4eddfb60f-kube-api-access-x6p8l\") pod \"481ee492-9331-4038-a70b-2fc4eddfb60f\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.023260 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data-custom\") pod \"481ee492-9331-4038-a70b-2fc4eddfb60f\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.023312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-combined-ca-bundle\") pod \"481ee492-9331-4038-a70b-2fc4eddfb60f\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.023608 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data\") pod \"481ee492-9331-4038-a70b-2fc4eddfb60f\" (UID: \"481ee492-9331-4038-a70b-2fc4eddfb60f\") " Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.029578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "481ee492-9331-4038-a70b-2fc4eddfb60f" (UID: "481ee492-9331-4038-a70b-2fc4eddfb60f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.029779 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481ee492-9331-4038-a70b-2fc4eddfb60f-kube-api-access-x6p8l" (OuterVolumeSpecName: "kube-api-access-x6p8l") pod "481ee492-9331-4038-a70b-2fc4eddfb60f" (UID: "481ee492-9331-4038-a70b-2fc4eddfb60f"). InnerVolumeSpecName "kube-api-access-x6p8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.056958 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "481ee492-9331-4038-a70b-2fc4eddfb60f" (UID: "481ee492-9331-4038-a70b-2fc4eddfb60f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.099260 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data" (OuterVolumeSpecName: "config-data") pod "481ee492-9331-4038-a70b-2fc4eddfb60f" (UID: "481ee492-9331-4038-a70b-2fc4eddfb60f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.126201 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.126234 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.126243 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6p8l\" (UniqueName: \"kubernetes.io/projected/481ee492-9331-4038-a70b-2fc4eddfb60f-kube-api-access-x6p8l\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.126254 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/481ee492-9331-4038-a70b-2fc4eddfb60f-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.402841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-49t6h" event={"ID":"e192db3c-31bd-4855-bebd-d30764c224cf","Type":"ContainerStarted","Data":"11254152a94aef76f5d06793656a243b91da8e91fbf720c7b7c5dc14c7a34e4e"} Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.406982 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-765f6bdcdf-s57mt" event={"ID":"481ee492-9331-4038-a70b-2fc4eddfb60f","Type":"ContainerDied","Data":"521ebf0afa93ca94e3a8b3b2ae58690311d700e4afb0e224d7f5c601e39004f4"} Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.407024 4687 scope.go:117] "RemoveContainer" containerID="7052aa4218a2cc6e551c077fdab071bb31b853ea12d2a1d1e0da5cce123e32e0" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.407139 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-765f6bdcdf-s57mt" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.410726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" event={"ID":"0b9a447d-54f6-42cc-bd25-42894645e1cf","Type":"ContainerStarted","Data":"38804c602783559eacfc856aabc79a853c073183cd238006610cc1b433bfe2c8"} Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.419980 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-49t6h" podStartSLOduration=2.836824375 podStartE2EDuration="8.419962468s" podCreationTimestamp="2026-03-12 16:29:58 +0000 UTC" firstStartedPulling="2026-03-12 16:29:59.828261711 +0000 UTC m=+1648.792224075" lastFinishedPulling="2026-03-12 16:30:05.411399824 +0000 UTC m=+1654.375362168" observedRunningTime="2026-03-12 16:30:06.418996982 +0000 UTC m=+1655.382959336" watchObservedRunningTime="2026-03-12 16:30:06.419962468 +0000 UTC m=+1655.383924812" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.468993 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" podStartSLOduration=2.18668563 podStartE2EDuration="6.46897445s" podCreationTimestamp="2026-03-12 16:30:00 +0000 UTC" firstStartedPulling="2026-03-12 16:30:01.130954434 +0000 UTC m=+1650.094916768" lastFinishedPulling="2026-03-12 16:30:05.413243244 +0000 UTC m=+1654.377205588" observedRunningTime="2026-03-12 16:30:06.463459219 +0000 UTC m=+1655.427421573" watchObservedRunningTime="2026-03-12 16:30:06.46897445 +0000 UTC m=+1655.432936794" Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.488310 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-765f6bdcdf-s57mt"] Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.503398 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-765f6bdcdf-s57mt"] Mar 12 16:30:06 crc kubenswrapper[4687]: I0312 16:30:06.733742 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:30:06 crc kubenswrapper[4687]: E0312 16:30:06.734512 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:30:07 crc kubenswrapper[4687]: I0312 16:30:07.426472 4687 generic.go:334] "Generic (PLEG): container finished" podID="e4c32507-463c-4dbc-887f-41d1713ac4c6" containerID="3e2126bebf33f8673629285b4046bd65028357627205ac6cfbcc3b5fabfb47d0" exitCode=0 Mar 12 16:30:07 crc kubenswrapper[4687]: I0312 16:30:07.426562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" event={"ID":"e4c32507-463c-4dbc-887f-41d1713ac4c6","Type":"ContainerDied","Data":"3e2126bebf33f8673629285b4046bd65028357627205ac6cfbcc3b5fabfb47d0"} Mar 12 16:30:07 crc kubenswrapper[4687]: I0312 16:30:07.756951 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481ee492-9331-4038-a70b-2fc4eddfb60f" path="/var/lib/kubelet/pods/481ee492-9331-4038-a70b-2fc4eddfb60f/volumes" Mar 12 16:30:08 crc kubenswrapper[4687]: I0312 16:30:08.438889 4687 
generic.go:334] "Generic (PLEG): container finished" podID="0b9a447d-54f6-42cc-bd25-42894645e1cf" containerID="38804c602783559eacfc856aabc79a853c073183cd238006610cc1b433bfe2c8" exitCode=0 Mar 12 16:30:08 crc kubenswrapper[4687]: I0312 16:30:08.438966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" event={"ID":"0b9a447d-54f6-42cc-bd25-42894645e1cf","Type":"ContainerDied","Data":"38804c602783559eacfc856aabc79a853c073183cd238006610cc1b433bfe2c8"} Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.028525 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.108431 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-inventory\") pod \"e4c32507-463c-4dbc-887f-41d1713ac4c6\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.108627 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-ssh-key-openstack-edpm-ipam\") pod \"e4c32507-463c-4dbc-887f-41d1713ac4c6\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.108685 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-repo-setup-combined-ca-bundle\") pod \"e4c32507-463c-4dbc-887f-41d1713ac4c6\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.108917 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmq9n\" (UniqueName: \"kubernetes.io/projected/e4c32507-463c-4dbc-887f-41d1713ac4c6-kube-api-access-hmq9n\") pod \"e4c32507-463c-4dbc-887f-41d1713ac4c6\" (UID: \"e4c32507-463c-4dbc-887f-41d1713ac4c6\") " Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.113802 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e4c32507-463c-4dbc-887f-41d1713ac4c6" (UID: "e4c32507-463c-4dbc-887f-41d1713ac4c6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.126724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c32507-463c-4dbc-887f-41d1713ac4c6-kube-api-access-hmq9n" (OuterVolumeSpecName: "kube-api-access-hmq9n") pod "e4c32507-463c-4dbc-887f-41d1713ac4c6" (UID: "e4c32507-463c-4dbc-887f-41d1713ac4c6"). InnerVolumeSpecName "kube-api-access-hmq9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.140538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-inventory" (OuterVolumeSpecName: "inventory") pod "e4c32507-463c-4dbc-887f-41d1713ac4c6" (UID: "e4c32507-463c-4dbc-887f-41d1713ac4c6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.160782 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4c32507-463c-4dbc-887f-41d1713ac4c6" (UID: "e4c32507-463c-4dbc-887f-41d1713ac4c6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.212475 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmq9n\" (UniqueName: \"kubernetes.io/projected/e4c32507-463c-4dbc-887f-41d1713ac4c6-kube-api-access-hmq9n\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.212507 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.212517 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.212526 4687 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c32507-463c-4dbc-887f-41d1713ac4c6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.458831 4687 generic.go:334] "Generic (PLEG): container finished" podID="e192db3c-31bd-4855-bebd-d30764c224cf" containerID="11254152a94aef76f5d06793656a243b91da8e91fbf720c7b7c5dc14c7a34e4e" exitCode=0 Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.460221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-49t6h" event={"ID":"e192db3c-31bd-4855-bebd-d30764c224cf","Type":"ContainerDied","Data":"11254152a94aef76f5d06793656a243b91da8e91fbf720c7b7c5dc14c7a34e4e"} Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.461504 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" event={"ID":"e4c32507-463c-4dbc-887f-41d1713ac4c6","Type":"ContainerDied","Data":"8fa08ba344d75bcae945890f5e8657dfe33af50e22a98b0d40a030fbde78838c"} Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.463512 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fa08ba344d75bcae945890f5e8657dfe33af50e22a98b0d40a030fbde78838c" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.461548 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.587576 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44"] Mar 12 16:30:09 crc kubenswrapper[4687]: E0312 16:30:09.588604 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c32507-463c-4dbc-887f-41d1713ac4c6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.588700 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c32507-463c-4dbc-887f-41d1713ac4c6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 16:30:09 crc kubenswrapper[4687]: E0312 16:30:09.588774 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ca01e6-2aca-4d80-939e-3680dbf9f60f" containerName="collect-profiles" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.588828 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ca01e6-2aca-4d80-939e-3680dbf9f60f" containerName="collect-profiles" Mar 12 16:30:09 crc kubenswrapper[4687]: E0312 16:30:09.588893 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481ee492-9331-4038-a70b-2fc4eddfb60f" containerName="heat-engine" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.589014 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="481ee492-9331-4038-a70b-2fc4eddfb60f" containerName="heat-engine" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.589332 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="481ee492-9331-4038-a70b-2fc4eddfb60f" containerName="heat-engine" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.589467 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ca01e6-2aca-4d80-939e-3680dbf9f60f" containerName="collect-profiles" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.589533 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c32507-463c-4dbc-887f-41d1713ac4c6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.590433 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.599813 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.600268 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.600441 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.600589 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.601412 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44"] Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.726672 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fw4\" (UniqueName: \"kubernetes.io/projected/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-kube-api-access-s9fw4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.726787 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.726816 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.829653 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fw4\" (UniqueName: \"kubernetes.io/projected/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-kube-api-access-s9fw4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.829718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.829752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.835460 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.848459 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.863101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fw4\" (UniqueName: \"kubernetes.io/projected/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-kube-api-access-s9fw4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-cqt44\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:09 crc kubenswrapper[4687]: I0312 16:30:09.921769 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.080228 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.241491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm4ch\" (UniqueName: \"kubernetes.io/projected/0b9a447d-54f6-42cc-bd25-42894645e1cf-kube-api-access-zm4ch\") pod \"0b9a447d-54f6-42cc-bd25-42894645e1cf\" (UID: \"0b9a447d-54f6-42cc-bd25-42894645e1cf\") " Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.252601 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9a447d-54f6-42cc-bd25-42894645e1cf-kube-api-access-zm4ch" (OuterVolumeSpecName: "kube-api-access-zm4ch") pod "0b9a447d-54f6-42cc-bd25-42894645e1cf" (UID: "0b9a447d-54f6-42cc-bd25-42894645e1cf"). InnerVolumeSpecName "kube-api-access-zm4ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.344731 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm4ch\" (UniqueName: \"kubernetes.io/projected/0b9a447d-54f6-42cc-bd25-42894645e1cf-kube-api-access-zm4ch\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.502161 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.510184 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555550-hbp7f" event={"ID":"0b9a447d-54f6-42cc-bd25-42894645e1cf","Type":"ContainerDied","Data":"f5813823418157f010c2aef4a9bb442a6438a6c8bc8ef2bf7bff3ad95c24ba52"} Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.510243 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5813823418157f010c2aef4a9bb442a6438a6c8bc8ef2bf7bff3ad95c24ba52" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.595662 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44"] Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.604836 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.819223 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-49t6h" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.962646 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-config-data\") pod \"e192db3c-31bd-4855-bebd-d30764c224cf\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.962738 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f5l4\" (UniqueName: \"kubernetes.io/projected/e192db3c-31bd-4855-bebd-d30764c224cf-kube-api-access-8f5l4\") pod \"e192db3c-31bd-4855-bebd-d30764c224cf\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.962774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-scripts\") pod \"e192db3c-31bd-4855-bebd-d30764c224cf\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.962979 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-combined-ca-bundle\") pod \"e192db3c-31bd-4855-bebd-d30764c224cf\" (UID: \"e192db3c-31bd-4855-bebd-d30764c224cf\") " Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.967941 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-scripts" (OuterVolumeSpecName: "scripts") pod "e192db3c-31bd-4855-bebd-d30764c224cf" (UID: "e192db3c-31bd-4855-bebd-d30764c224cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.967948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e192db3c-31bd-4855-bebd-d30764c224cf-kube-api-access-8f5l4" (OuterVolumeSpecName: "kube-api-access-8f5l4") pod "e192db3c-31bd-4855-bebd-d30764c224cf" (UID: "e192db3c-31bd-4855-bebd-d30764c224cf"). InnerVolumeSpecName "kube-api-access-8f5l4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:10 crc kubenswrapper[4687]: I0312 16:30:10.999881 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-config-data" (OuterVolumeSpecName: "config-data") pod "e192db3c-31bd-4855-bebd-d30764c224cf" (UID: "e192db3c-31bd-4855-bebd-d30764c224cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.005460 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e192db3c-31bd-4855-bebd-d30764c224cf" (UID: "e192db3c-31bd-4855-bebd-d30764c224cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.068032 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.068648 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.068874 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f5l4\" (UniqueName: \"kubernetes.io/projected/e192db3c-31bd-4855-bebd-d30764c224cf-kube-api-access-8f5l4\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.069059 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e192db3c-31bd-4855-bebd-d30764c224cf-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.164255 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555544-wmmm5"] Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.175708 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555544-wmmm5"] Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.526983 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" event={"ID":"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7","Type":"ContainerStarted","Data":"291548d6e44099c387ee15fd78c01cb6ab7bd0b508a262371b214e9128b2c800"} Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.536278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-49t6h" event={"ID":"e192db3c-31bd-4855-bebd-d30764c224cf","Type":"ContainerDied","Data":"6e60abd938e1100b17f7aaa2cd39747b2512d28badf53e879fecd55cb82d6e8c"} Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.536542 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e60abd938e1100b17f7aaa2cd39747b2512d28badf53e879fecd55cb82d6e8c" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.536343 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-49t6h" Mar 12 16:30:11 crc kubenswrapper[4687]: I0312 16:30:11.809510 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2e2820-00f3-4111-96a4-4d378529f9fd" path="/var/lib/kubelet/pods/9b2e2820-00f3-4111-96a4-4d378529f9fd/volumes" Mar 12 16:30:12 crc kubenswrapper[4687]: I0312 16:30:12.550912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" event={"ID":"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7","Type":"ContainerStarted","Data":"39f332d19b2823eaa43ee2087af2d8a2c4fabcee1a72694d6b19a236d8732089"} Mar 12 16:30:12 crc kubenswrapper[4687]: I0312 16:30:12.584509 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" podStartSLOduration=2.809083354 podStartE2EDuration="3.58448744s" podCreationTimestamp="2026-03-12 16:30:09 +0000 UTC" firstStartedPulling="2026-03-12 16:30:10.604616013 +0000 UTC m=+1659.568578357" lastFinishedPulling="2026-03-12 16:30:11.380020099 +0000 UTC m=+1660.343982443" observedRunningTime="2026-03-12 16:30:12.571876585 +0000 UTC m=+1661.535838949" watchObservedRunningTime="2026-03-12 16:30:12.58448744 +0000 UTC m=+1661.548449794" Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.395670 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.677095 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.748485 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.988181 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.988458 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-api" containerID="cri-o://b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf" gracePeriod=30 Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.988925 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-listener" containerID="cri-o://985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b" gracePeriod=30 Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.988973 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-notifier" containerID="cri-o://f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81" gracePeriod=30 Mar 12 16:30:13 crc kubenswrapper[4687]: I0312 16:30:13.989004 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-evaluator" containerID="cri-o://52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5" gracePeriod=30 Mar 12 16:30:14 crc kubenswrapper[4687]: I0312 16:30:14.575563 4687 generic.go:334] "Generic (PLEG): container finished" podID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerID="52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5" exitCode=0 Mar 12 
16:30:14 crc kubenswrapper[4687]: I0312 16:30:14.576210 4687 generic.go:334] "Generic (PLEG): container finished" podID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerID="b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf" exitCode=0 Mar 12 16:30:14 crc kubenswrapper[4687]: I0312 16:30:14.575630 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerDied","Data":"52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5"} Mar 12 16:30:14 crc kubenswrapper[4687]: I0312 16:30:14.576252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerDied","Data":"b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf"} Mar 12 16:30:14 crc kubenswrapper[4687]: I0312 16:30:14.578531 4687 generic.go:334] "Generic (PLEG): container finished" podID="5606d9d1-3ffd-4420-934b-b0e3c9ac86b7" containerID="39f332d19b2823eaa43ee2087af2d8a2c4fabcee1a72694d6b19a236d8732089" exitCode=0 Mar 12 16:30:14 crc kubenswrapper[4687]: I0312 16:30:14.578582 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" event={"ID":"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7","Type":"ContainerDied","Data":"39f332d19b2823eaa43ee2087af2d8a2c4fabcee1a72694d6b19a236d8732089"} Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.142822 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.223983 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-inventory\") pod \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.224424 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-ssh-key-openstack-edpm-ipam\") pod \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.224508 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9fw4\" (UniqueName: \"kubernetes.io/projected/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-kube-api-access-s9fw4\") pod \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\" (UID: \"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7\") " Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.246661 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-kube-api-access-s9fw4" (OuterVolumeSpecName: "kube-api-access-s9fw4") pod "5606d9d1-3ffd-4420-934b-b0e3c9ac86b7" (UID: "5606d9d1-3ffd-4420-934b-b0e3c9ac86b7"). InnerVolumeSpecName "kube-api-access-s9fw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.306519 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-inventory" (OuterVolumeSpecName: "inventory") pod "5606d9d1-3ffd-4420-934b-b0e3c9ac86b7" (UID: "5606d9d1-3ffd-4420-934b-b0e3c9ac86b7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.311247 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5606d9d1-3ffd-4420-934b-b0e3c9ac86b7" (UID: "5606d9d1-3ffd-4420-934b-b0e3c9ac86b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.328328 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.328401 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9fw4\" (UniqueName: \"kubernetes.io/projected/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-kube-api-access-s9fw4\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.328415 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5606d9d1-3ffd-4420-934b-b0e3c9ac86b7-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.600040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" event={"ID":"5606d9d1-3ffd-4420-934b-b0e3c9ac86b7","Type":"ContainerDied","Data":"291548d6e44099c387ee15fd78c01cb6ab7bd0b508a262371b214e9128b2c800"} Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.600078 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="291548d6e44099c387ee15fd78c01cb6ab7bd0b508a262371b214e9128b2c800" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.600103 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-cqt44" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.696937 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf"] Mar 12 16:30:16 crc kubenswrapper[4687]: E0312 16:30:16.697782 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e192db3c-31bd-4855-bebd-d30764c224cf" containerName="aodh-db-sync" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.697799 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e192db3c-31bd-4855-bebd-d30764c224cf" containerName="aodh-db-sync" Mar 12 16:30:16 crc kubenswrapper[4687]: E0312 16:30:16.697815 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9a447d-54f6-42cc-bd25-42894645e1cf" containerName="oc" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.697829 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9a447d-54f6-42cc-bd25-42894645e1cf" containerName="oc" Mar 12 16:30:16 crc kubenswrapper[4687]: E0312 16:30:16.697859 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5606d9d1-3ffd-4420-934b-b0e3c9ac86b7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.697866 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5606d9d1-3ffd-4420-934b-b0e3c9ac86b7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.698087 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e192db3c-31bd-4855-bebd-d30764c224cf" containerName="aodh-db-sync" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.698110 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5606d9d1-3ffd-4420-934b-b0e3c9ac86b7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.698121 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9a447d-54f6-42cc-bd25-42894645e1cf" containerName="oc" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.699043 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.707150 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf"] Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.708984 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.709568 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.709605 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.709615 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.841819 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2dx\" (UniqueName: \"kubernetes.io/projected/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-kube-api-access-xx2dx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.841947 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.842006 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.842195 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.944832 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.944987 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx2dx\" (UniqueName: 
\"kubernetes.io/projected/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-kube-api-access-xx2dx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.945254 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.945329 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.949883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.950305 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.951785 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:16 crc kubenswrapper[4687]: I0312 16:30:16.975078 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx2dx\" (UniqueName: \"kubernetes.io/projected/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-kube-api-access-xx2dx\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:17 crc kubenswrapper[4687]: I0312 16:30:17.038475 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:30:17 crc kubenswrapper[4687]: I0312 16:30:17.628761 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf"] Mar 12 16:30:18 crc kubenswrapper[4687]: I0312 16:30:18.647547 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" event={"ID":"e5e2e312-1292-454f-8b27-6a6a43fe4a1e","Type":"ContainerStarted","Data":"b816ddb30277d6afe8038e7164fb2a6de462da8a3892ac751cb9765087aaa5f7"} Mar 12 16:30:18 crc kubenswrapper[4687]: I0312 16:30:18.648034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" event={"ID":"e5e2e312-1292-454f-8b27-6a6a43fe4a1e","Type":"ContainerStarted","Data":"606ecb5991955481d39030f595bc5fcb1359bb37e0a615d270512f827611ba5d"} Mar 12 16:30:18 crc kubenswrapper[4687]: I0312 16:30:18.653422 4687 generic.go:334] "Generic (PLEG): container finished" podID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerID="985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b" exitCode=0 Mar 12 16:30:18 crc kubenswrapper[4687]: I0312 16:30:18.653511 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerDied","Data":"985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b"} Mar 12 16:30:18 crc kubenswrapper[4687]: I0312 16:30:18.715985 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" podStartSLOduration=2.340730846 podStartE2EDuration="2.715965147s" podCreationTimestamp="2026-03-12 16:30:16 +0000 UTC" firstStartedPulling="2026-03-12 16:30:17.630953409 +0000 UTC m=+1666.594915753" lastFinishedPulling="2026-03-12 16:30:18.00618771 +0000 UTC m=+1666.970150054" observedRunningTime="2026-03-12 16:30:18.674165512 +0000 UTC m=+1667.638127866" watchObservedRunningTime="2026-03-12 16:30:18.715965147 +0000 UTC m=+1667.679927491" Mar 12 16:30:18 crc kubenswrapper[4687]: I0312 16:30:18.734868 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="rabbitmq" containerID="cri-o://aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7" gracePeriod=604796 Mar 12 16:30:18 crc kubenswrapper[4687]: I0312 16:30:18.872903 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Mar 12 16:30:19 crc kubenswrapper[4687]: I0312 16:30:19.734578 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:30:19 crc kubenswrapper[4687]: E0312 16:30:19.735199 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.405543 4687 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.558078 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnn2b\" (UniqueName: \"kubernetes.io/projected/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-kube-api-access-bnn2b\") pod \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.558441 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-config-data\") pod \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.558523 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-scripts\") pod \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.558551 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-internal-tls-certs\") pod \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.558701 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-combined-ca-bundle\") pod \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.558806 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-public-tls-certs\") pod \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\" (UID: \"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89\") " Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.576745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-kube-api-access-bnn2b" (OuterVolumeSpecName: "kube-api-access-bnn2b") pod "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" (UID: "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89"). InnerVolumeSpecName "kube-api-access-bnn2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.577501 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-scripts" (OuterVolumeSpecName: "scripts") pod "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" (UID: "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.655592 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" (UID: "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.657072 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" (UID: "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.660909 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnn2b\" (UniqueName: \"kubernetes.io/projected/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-kube-api-access-bnn2b\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.660938 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.660948 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.660956 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.691877 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" (UID: "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.759012 4687 generic.go:334] "Generic (PLEG): container finished" podID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerID="f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81" exitCode=0 Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.759093 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.759118 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerDied","Data":"f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81"} Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.759676 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"674a82b3-79e5-4e5f-9e0e-0fa47dc24b89","Type":"ContainerDied","Data":"4fe859d825e13fc90e25ce154b2df3da083c27c98fe835fb8676eab88e885362"} Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.760894 4687 scope.go:117] "RemoveContainer" containerID="985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.763337 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.768115 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-config-data" (OuterVolumeSpecName: "config-data") pod "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" (UID: "674a82b3-79e5-4e5f-9e0e-0fa47dc24b89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.800370 4687 scope.go:117] "RemoveContainer" containerID="f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.833744 4687 scope.go:117] "RemoveContainer" containerID="52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.865631 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.881815 4687 scope.go:117] "RemoveContainer" containerID="b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.948753 4687 scope.go:117] "RemoveContainer" containerID="985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b" Mar 12 16:30:24 crc kubenswrapper[4687]: E0312 16:30:24.949199 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b\": container with ID starting with 985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b not found: ID does not exist" containerID="985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.949316 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b"} err="failed to get container status \"985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b\": rpc error: code = NotFound desc = could not find container \"985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b\": container with ID starting with 985c0119dfdce713839ce10fdb456bc5c40855f9fcffe33eda173b59f10a020b not found: ID does not exist" Mar 12 
16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.949422 4687 scope.go:117] "RemoveContainer" containerID="f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81" Mar 12 16:30:24 crc kubenswrapper[4687]: E0312 16:30:24.954061 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81\": container with ID starting with f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81 not found: ID does not exist" containerID="f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.954224 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81"} err="failed to get container status \"f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81\": rpc error: code = NotFound desc = could not find container \"f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81\": container with ID starting with f76cf081f38380fae89d0e7562911dde601d3d89f80a247fc02092949c818c81 not found: ID does not exist" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.954302 4687 scope.go:117] "RemoveContainer" containerID="52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5" Mar 12 16:30:24 crc kubenswrapper[4687]: E0312 16:30:24.954696 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5\": container with ID starting with 52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5 not found: ID does not exist" containerID="52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.954776 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5"} err="failed to get container status \"52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5\": rpc error: code = NotFound desc = could not find container \"52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5\": container with ID starting with 52b22f12d7b8e4b1d360f674a1e223f3f0c5da060e3af4c696c41d1c94b156a5 not found: ID does not exist" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.954888 4687 scope.go:117] "RemoveContainer" containerID="b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf" Mar 12 16:30:24 crc kubenswrapper[4687]: E0312 16:30:24.958785 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf\": container with ID starting with b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf not found: ID does not exist" containerID="b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf" Mar 12 16:30:24 crc kubenswrapper[4687]: I0312 16:30:24.958826 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf"} err="failed to get container status \"b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf\": rpc error: code = NotFound desc = could not find container 
\"b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf\": container with ID starting with b46c9f4705e1f01e6447191f04b6de4869860aa68b059e09258de9b290f92bdf not found: ID does not exist" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.129071 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.142329 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.155512 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.155937 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-api" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.155952 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-api" Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.155967 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-listener" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.155974 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-listener" Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.156022 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-notifier" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.156029 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-notifier" Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.156049 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-evaluator" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.156055 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-evaluator" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.156250 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-api" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.156272 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-notifier" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.156284 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-evaluator" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.156308 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" containerName="aodh-listener" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.158160 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.163349 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.163492 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.163649 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.163807 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-w4khk" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.167700 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.172442 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.179578 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-internal-tls-certs\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.179819 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.179939 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdvrv\" (UniqueName: \"kubernetes.io/projected/281c52ef-94bc-4850-b95a-ca740095f39b-kube-api-access-vdvrv\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.185729 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-config-data\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.185791 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-public-tls-certs\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.185854 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-scripts\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.287498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" 
Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.287644 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdvrv\" (UniqueName: \"kubernetes.io/projected/281c52ef-94bc-4850-b95a-ca740095f39b-kube-api-access-vdvrv\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.287701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-config-data\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.287726 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-public-tls-certs\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.287777 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-scripts\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.288457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-internal-tls-certs\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.292870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-scripts\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.292971 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.307575 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-config-data\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.309147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-internal-tls-certs\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.309239 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/281c52ef-94bc-4850-b95a-ca740095f39b-public-tls-certs\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.309578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdvrv\" 
(UniqueName: \"kubernetes.io/projected/281c52ef-94bc-4850-b95a-ca740095f39b-kube-api-access-vdvrv\") pod \"aodh-0\" (UID: \"281c52ef-94bc-4850-b95a-ca740095f39b\") " pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.476214 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.480916 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.495145 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e395399-5a85-445f-9b56-4036687b73b6-erlang-cookie-secret\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.495240 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bst5n\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-kube-api-access-bst5n\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.496764 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.496869 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-plugins\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.496927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-plugins-conf\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.497078 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-confd\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.497111 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-config-data\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.497198 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e395399-5a85-445f-9b56-4036687b73b6-pod-info\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.497219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-tls\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.497283 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-erlang-cookie\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.497335 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-server-conf\") pod \"4e395399-5a85-445f-9b56-4036687b73b6\" (UID: \"4e395399-5a85-445f-9b56-4036687b73b6\") " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.497972 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.498528 4687 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.501460 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e395399-5a85-445f-9b56-4036687b73b6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.502045 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4e395399-5a85-445f-9b56-4036687b73b6-pod-info" (OuterVolumeSpecName: "pod-info") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.504838 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-kube-api-access-bst5n" (OuterVolumeSpecName: "kube-api-access-bst5n") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "kube-api-access-bst5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.506369 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.507011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.540703 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.571115 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-config-data" (OuterVolumeSpecName: "config-data") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.572964 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa" (OuterVolumeSpecName: "persistence") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.607233 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.607730 4687 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e395399-5a85-445f-9b56-4036687b73b6-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.607794 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.607856 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.607912 4687 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e395399-5a85-445f-9b56-4036687b73b6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.608003 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") on node \"crc\" " Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.608069 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bst5n\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-kube-api-access-bst5n\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.608134 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.739490 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-server-conf" (OuterVolumeSpecName: "server-conf") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.769982 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.770562 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa") on node "crc" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.780154 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674a82b3-79e5-4e5f-9e0e-0fa47dc24b89" path="/var/lib/kubelet/pods/674a82b3-79e5-4e5f-9e0e-0fa47dc24b89/volumes" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.783716 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4e395399-5a85-445f-9b56-4036687b73b6" (UID: "4e395399-5a85-445f-9b56-4036687b73b6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.818603 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e395399-5a85-445f-9b56-4036687b73b6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.818642 4687 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e395399-5a85-445f-9b56-4036687b73b6-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.818657 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") on node \"crc\" DevicePath \"\"" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.823175 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e395399-5a85-445f-9b56-4036687b73b6" containerID="aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7" exitCode=0 Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.823339 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.824852 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4e395399-5a85-445f-9b56-4036687b73b6","Type":"ContainerDied","Data":"aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7"} Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.824891 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"4e395399-5a85-445f-9b56-4036687b73b6","Type":"ContainerDied","Data":"d7869bc2279b98a0de8bb2b6acfce73b67a26fc7645d4a93ef4c1a0e669db224"} Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.824907 4687 scope.go:117] "RemoveContainer" containerID="aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.878538 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.889652 4687 scope.go:117] "RemoveContainer" containerID="a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.892399 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.912224 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.912923 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="rabbitmq" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.912947 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="rabbitmq" Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.912975 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="setup-container" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.912984 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="setup-container" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.913282 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e395399-5a85-445f-9b56-4036687b73b6" containerName="rabbitmq" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.914720 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.926191 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.989459 4687 scope.go:117] "RemoveContainer" containerID="aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7" Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.990161 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7\": container with ID starting with aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7 not found: ID does not exist" containerID="aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.990192 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7"} err="failed to get container status \"aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7\": rpc error: code = NotFound desc = could not find container \"aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7\": container with ID starting with aab136091795230f3636cf8b98184dc0da841be78450d26b501e6b20c256a7c7 not found: ID does not exist" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.990213 4687 scope.go:117] "RemoveContainer" containerID="a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878" Mar 12 16:30:25 crc kubenswrapper[4687]: E0312 16:30:25.990475 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878\": container with ID starting with a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878 not found: ID does not exist" containerID="a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878" Mar 12 16:30:25 crc kubenswrapper[4687]: I0312 16:30:25.990510 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878"} err="failed to get container status \"a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878\": rpc error: code = NotFound desc = could not find container \"a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878\": container with ID starting with a79997d6f386120588a8bc7d52a73f07327ff138fc995dbd0d2a37782cc79878 not found: ID does not exist" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.022718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.022767 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8q8\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-kube-api-access-nj8q8\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.022810 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8df491ce-9158-4b80-958f-2008e3280c07-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.022968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.023056 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.023336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.023473 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8df491ce-9158-4b80-958f-2008e3280c07-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.023606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.023696 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.023777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.023999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-config-data\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.086992 4687 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/aodh-0"] Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.125878 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.125992 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126035 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8df491ce-9158-4b80-958f-2008e3280c07-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126075 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-config-data\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126209 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126228 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8q8\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-kube-api-access-nj8q8\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126267 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8df491ce-9158-4b80-958f-2008e3280c07-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126305 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126556 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.126962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.127677 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.127845 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-server-conf\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.128454 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8df491ce-9158-4b80-958f-2008e3280c07-config-data\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.131972 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.132472 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2124df05dbbe8a852954440b466035202297df9f813faaec4e3f5c591c134b4b/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.133677 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8df491ce-9158-4b80-958f-2008e3280c07-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.133807 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.133894 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8df491ce-9158-4b80-958f-2008e3280c07-pod-info\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.137702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.145122 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8q8\" (UniqueName: \"kubernetes.io/projected/8df491ce-9158-4b80-958f-2008e3280c07-kube-api-access-nj8q8\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.228003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d521d7bb-c6ae-475e-a1c8-190cf2ebebaa\") pod \"rabbitmq-server-1\" (UID: \"8df491ce-9158-4b80-958f-2008e3280c07\") " pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.243877 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.824669 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.863490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8df491ce-9158-4b80-958f-2008e3280c07","Type":"ContainerStarted","Data":"719a4023087ac3ecf64b0fffe256795b8c8a7fd481cb5c804f5378dc74f95029"} Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.871167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"281c52ef-94bc-4850-b95a-ca740095f39b","Type":"ContainerStarted","Data":"87941ae32b7425d102e8d5cd3f509f0cb3db17ec4ba159b0d5173522b7ea0f5d"} Mar 12 16:30:26 crc kubenswrapper[4687]: I0312 16:30:26.871210 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"281c52ef-94bc-4850-b95a-ca740095f39b","Type":"ContainerStarted","Data":"3e793470c9dd2ddcd173df193e843f75c78268f204b0d1279823ddc27b7c5e5e"} Mar 12 16:30:27 crc kubenswrapper[4687]: I0312 16:30:27.749000 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e395399-5a85-445f-9b56-4036687b73b6" path="/var/lib/kubelet/pods/4e395399-5a85-445f-9b56-4036687b73b6/volumes" Mar 12 16:30:27 crc kubenswrapper[4687]: I0312 16:30:27.895610 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"281c52ef-94bc-4850-b95a-ca740095f39b","Type":"ContainerStarted","Data":"f3d3ec3cfd42745c9f36cdb1a04f518f8be6665c56bfbdcf448634644fe97349"} Mar 12 16:30:28 crc kubenswrapper[4687]: I0312 16:30:28.048136 4687 scope.go:117] "RemoveContainer" containerID="2a394af9b2d5ca8a400127e9212ed0b768cf1c40d2bd3bf9ba1abf09eacbd363" Mar 12 16:30:28 crc kubenswrapper[4687]: I0312 16:30:28.138957 4687 scope.go:117] "RemoveContainer" containerID="31d8b51e69cfc98e59ae0b5d16e21902f38466446b0e07db8bbceabf6211ff5b" Mar 12 16:30:28 crc kubenswrapper[4687]: I0312 16:30:28.182186 4687 scope.go:117] "RemoveContainer" containerID="4e2270bd184b3941096bbd26dd80a4d17af176b06faf227eb4d015fb52cb0d49" Mar 12 16:30:28 crc kubenswrapper[4687]: I0312 16:30:28.754278 4687 scope.go:117] "RemoveContainer" containerID="a522b6c90f7b5fa65d7452ae3a2f13101a83b45dff291a8f1ead8c7d88ab535c" Mar 12 16:30:28 crc kubenswrapper[4687]: I0312 16:30:28.910311 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8df491ce-9158-4b80-958f-2008e3280c07","Type":"ContainerStarted","Data":"a46db51d8e77468a2927bc93358015b4dfd75caf24c876ade47425f43a6e870c"} Mar 12 16:30:29 crc kubenswrapper[4687]: I0312 16:30:29.924349 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"281c52ef-94bc-4850-b95a-ca740095f39b","Type":"ContainerStarted","Data":"5e2b1ac6b1917e20138b3d5315e08a25c250f3bcfbd60a735acdaec4b7287592"} Mar 12 16:30:30 crc kubenswrapper[4687]: I0312 16:30:30.733393 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:30:30 crc kubenswrapper[4687]: E0312 16:30:30.733866 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:30:30 crc kubenswrapper[4687]: I0312 16:30:30.937382 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"281c52ef-94bc-4850-b95a-ca740095f39b","Type":"ContainerStarted","Data":"3c140f1a3ab072c31eaf3c7b0c8110f405147ec6db947e1d0dc2adf687499296"} Mar 12 16:30:30 crc kubenswrapper[4687]: I0312 16:30:30.971323 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.005635937 podStartE2EDuration="5.971298292s" podCreationTimestamp="2026-03-12 16:30:25 +0000 UTC" firstStartedPulling="2026-03-12 16:30:26.087637934 +0000 UTC m=+1675.051600278" lastFinishedPulling="2026-03-12 16:30:30.053300279 +0000 UTC m=+1679.017262633" observedRunningTime="2026-03-12 16:30:30.958172033 +0000 UTC m=+1679.922134387" watchObservedRunningTime="2026-03-12 16:30:30.971298292 +0000 UTC m=+1679.935260646" Mar 12 16:30:42 crc kubenswrapper[4687]: I0312 16:30:42.733535 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:30:42 crc kubenswrapper[4687]: E0312 16:30:42.734787 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:30:54 crc kubenswrapper[4687]: I0312 16:30:54.732593 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:30:54 crc kubenswrapper[4687]: E0312 16:30:54.733468 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:31:01 crc kubenswrapper[4687]: I0312 16:31:01.324871 4687 generic.go:334] "Generic (PLEG): container finished" podID="8df491ce-9158-4b80-958f-2008e3280c07" containerID="a46db51d8e77468a2927bc93358015b4dfd75caf24c876ade47425f43a6e870c" exitCode=0 Mar 12 16:31:01 crc kubenswrapper[4687]: I0312 16:31:01.324961 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8df491ce-9158-4b80-958f-2008e3280c07","Type":"ContainerDied","Data":"a46db51d8e77468a2927bc93358015b4dfd75caf24c876ade47425f43a6e870c"} Mar 12 16:31:02 crc kubenswrapper[4687]: I0312 16:31:02.340560 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"8df491ce-9158-4b80-958f-2008e3280c07","Type":"ContainerStarted","Data":"95130a657f6f8f8c312866f5b802a9428c87f11bc7a3f4248b16588528c5e4a3"} Mar 12 16:31:02 crc kubenswrapper[4687]: I0312 16:31:02.342265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 12 16:31:02 crc kubenswrapper[4687]: I0312 16:31:02.378553 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" 
podStartSLOduration=37.37852091 podStartE2EDuration="37.37852091s" podCreationTimestamp="2026-03-12 16:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:31:02.366461511 +0000 UTC m=+1711.330423925" watchObservedRunningTime="2026-03-12 16:31:02.37852091 +0000 UTC m=+1711.342483294" Mar 12 16:31:08 crc kubenswrapper[4687]: I0312 16:31:08.733501 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:31:08 crc kubenswrapper[4687]: E0312 16:31:08.734252 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:31:16 crc kubenswrapper[4687]: I0312 16:31:16.247558 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 12 16:31:16 crc kubenswrapper[4687]: I0312 16:31:16.311206 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:31:20 crc kubenswrapper[4687]: I0312 16:31:20.842485 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="rabbitmq" containerID="cri-o://c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e" gracePeriod=604796 Mar 12 16:31:22 crc kubenswrapper[4687]: I0312 16:31:22.734474 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:31:22 crc kubenswrapper[4687]: E0312 16:31:22.735157 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.589039 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.665764 4687 generic.go:334] "Generic (PLEG): container finished" podID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerID="c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e" exitCode=0 Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.665817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ae78a70-0a58-4f39-977b-a4e5ce5b981d","Type":"ContainerDied","Data":"c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e"} Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.665856 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ae78a70-0a58-4f39-977b-a4e5ce5b981d","Type":"ContainerDied","Data":"d5bca58032342dd2bffd6685022680f009df6193fb433ca773b747c5316e7fba"} Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.665883 4687 scope.go:117] "RemoveContainer" containerID="c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.666145 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.671365 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-erlang-cookie-secret\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.671644 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-pod-info\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.672416 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.672457 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-tls\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.672661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-plugins\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.672701 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-erlang-cookie\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.672732 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-server-conf\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.672822 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-config-data\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.672889 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-confd\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.673153 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vbr7\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-kube-api-access-4vbr7\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.673238 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-plugins-conf\") pod \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\" (UID: \"7ae78a70-0a58-4f39-977b-a4e5ce5b981d\") " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.674018 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.674738 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.674751 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.678771 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.697857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-kube-api-access-4vbr7" (OuterVolumeSpecName: "kube-api-access-4vbr7") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "kube-api-access-4vbr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.700350 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-pod-info" (OuterVolumeSpecName: "pod-info") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.701238 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.710404 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.778000 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vbr7\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-kube-api-access-4vbr7\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.778027 4687 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.778037 4687 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.778048 4687 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-pod-info\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.778059 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.778069 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.781133 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e" (OuterVolumeSpecName: "persistence") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "pvc-29745512-05db-4b6a-80e0-2365ecb1933e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.790511 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-config-data" (OuterVolumeSpecName: "config-data") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.834224 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-server-conf" (OuterVolumeSpecName: "server-conf") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.845325 4687 scope.go:117] "RemoveContainer" containerID="539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.860570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7ae78a70-0a58-4f39-977b-a4e5ce5b981d" (UID: "7ae78a70-0a58-4f39-977b-a4e5ce5b981d"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.883572 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") on node \"crc\" " Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.883604 4687 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-server-conf\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.883616 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.884161 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ae78a70-0a58-4f39-977b-a4e5ce5b981d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.897082 4687 scope.go:117] "RemoveContainer" containerID="c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e" Mar 12 16:31:27 crc kubenswrapper[4687]: E0312 16:31:27.898558 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e\": container with ID starting with c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e not found: ID does not exist" containerID="c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.898594 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e"} err="failed to get container status \"c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e\": rpc error: code = NotFound desc = could not find container \"c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e\": container with ID starting with c85bb3f109524116305d8805ea5dee620383c5b0d76f88b05e0a089fb88ceb6e not found: ID does not exist" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.898613 4687 scope.go:117] "RemoveContainer" containerID="539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac" Mar 12 16:31:27 crc kubenswrapper[4687]: E0312 16:31:27.898939 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac\": container with ID starting with 539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac not found: ID does not exist" containerID="539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.898953 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac"} err="failed to get container status \"539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac\": rpc error: code = NotFound desc = could not find container \"539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac\": 
container with ID starting with 539c8f9318eea456a8c91013ea6f3b127882697809e27996a85d83546526fcac not found: ID does not exist" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.923277 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.923492 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-29745512-05db-4b6a-80e0-2365ecb1933e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e") on node "crc" Mar 12 16:31:27 crc kubenswrapper[4687]: I0312 16:31:27.988942 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") on node \"crc\" DevicePath \"\"" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.017213 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.041935 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.060846 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:31:28 crc kubenswrapper[4687]: E0312 16:31:28.061437 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="setup-container" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.061456 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="setup-container" Mar 12 16:31:28 crc kubenswrapper[4687]: E0312 16:31:28.061465 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="rabbitmq" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.061472 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="rabbitmq" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.061717 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" containerName="rabbitmq" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.064086 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.069407 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.090701 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-config-data\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091176 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f00fce39-19aa-4b10-9e76-04c114232731-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091216 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091263 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f00fce39-19aa-4b10-9e76-04c114232731-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091488 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091603 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv92b\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-kube-api-access-wv92b\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091648 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091694 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091739 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091787 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.091926 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193534 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv92b\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-kube-api-access-wv92b\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193632 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193665 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193734 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193766 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193791 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f00fce39-19aa-4b10-9e76-04c114232731-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f00fce39-19aa-4b10-9e76-04c114232731-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.193889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.195489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.195946 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-config-data\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.196193 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.196245 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f00fce39-19aa-4b10-9e76-04c114232731-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.196580 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.198541 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.198686 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.198802 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f00fce39-19aa-4b10-9e76-04c114232731-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.209038 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f00fce39-19aa-4b10-9e76-04c114232731-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.212247 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.212278 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bdd724b6c4e3941fdc396afd4b3640dbe7d2e939f05c888d1f1536d2ca37cbe0/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.213511 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv92b\" (UniqueName: \"kubernetes.io/projected/f00fce39-19aa-4b10-9e76-04c114232731-kube-api-access-wv92b\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.294419 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-29745512-05db-4b6a-80e0-2365ecb1933e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29745512-05db-4b6a-80e0-2365ecb1933e\") pod \"rabbitmq-server-0\" (UID: \"f00fce39-19aa-4b10-9e76-04c114232731\") " pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.388571 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 16:31:28 crc kubenswrapper[4687]: I0312 16:31:28.899829 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 16:31:28 crc kubenswrapper[4687]: W0312 16:31:28.902857 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00fce39_19aa_4b10_9e76_04c114232731.slice/crio-cbc4db77e210038103d50739abca25a60641107760e3d887607d91bc23eb722b WatchSource:0}: Error finding container cbc4db77e210038103d50739abca25a60641107760e3d887607d91bc23eb722b: Status 404 returned error can't find the container with id cbc4db77e210038103d50739abca25a60641107760e3d887607d91bc23eb722b Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.075451 4687 scope.go:117] "RemoveContainer" containerID="6340e8790af705ccc4145206bdca7b0f3920646ee90ca2fc1000668d160c4a09" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.105215 4687 scope.go:117] "RemoveContainer" containerID="2613474324dd4b9c6c1b6f5102decd6086eff68bcade87f29fbc54684b948e08" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.199783 4687 scope.go:117] "RemoveContainer" containerID="01ef395583a96bd2bd9e7aea00f0d08dc770a624aed405cceb7e312fded76ec3" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.235101 4687 scope.go:117] "RemoveContainer" containerID="6b55d40ab435020e60158b8ab39b028d06aaa300834b855272a05efefc836b8e" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.264083 4687 scope.go:117] "RemoveContainer" containerID="3ddc8db5928616656f2964d46677648cda6f9c7ffd840ee20ebb88fb57e174fb" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.290010 4687 scope.go:117] "RemoveContainer" containerID="97fd5a5ead64e55a87cb413325440d309964310bd709e90795867c2632966c50" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.317082 4687 scope.go:117] "RemoveContainer" containerID="1661c28baa01e9cfaa3aa820ae239bdea1de27b52ffe4d86baf84f587eef3221" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.347689 4687 scope.go:117] "RemoveContainer" containerID="4ece12bc9af7ebaad8d254643ab96d23d6f56a775e868a540bb91d32250926ba" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.372904 4687 scope.go:117] "RemoveContainer" containerID="82d8b337c242758d743ba73ffa4db75407e2644bd2c642ca03f61a17b584cba6" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.444143 4687 scope.go:117] "RemoveContainer" containerID="93f50d0260b22a69b834c0a00acc8050fadcde322b69ab711c598bcb4e0e2e61" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.508744 4687 scope.go:117] "RemoveContainer" containerID="9e03df690c2ab0a0255523ecfa32f26fa1ebe41e3863e2dce435b26c4b3e0293" Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.695946 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f00fce39-19aa-4b10-9e76-04c114232731","Type":"ContainerStarted","Data":"cbc4db77e210038103d50739abca25a60641107760e3d887607d91bc23eb722b"} Mar 12 16:31:29 crc kubenswrapper[4687]: I0312 16:31:29.770529 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae78a70-0a58-4f39-977b-a4e5ce5b981d" path="/var/lib/kubelet/pods/7ae78a70-0a58-4f39-977b-a4e5ce5b981d/volumes" Mar 12 16:31:31 crc kubenswrapper[4687]: I0312 16:31:31.730805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"f00fce39-19aa-4b10-9e76-04c114232731","Type":"ContainerStarted","Data":"e640c307ecab1bdcfe9a41282562412a756caf9a1132c4712f640da93e7750df"} Mar 12 16:31:34 crc kubenswrapper[4687]: I0312 16:31:34.732530 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:31:34 crc kubenswrapper[4687]: E0312 16:31:34.733108 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:31:48 crc kubenswrapper[4687]: I0312 16:31:48.733458 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:31:48 crc kubenswrapper[4687]: E0312 16:31:48.734479 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:31:59 crc kubenswrapper[4687]: I0312 16:31:59.733827 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:31:59 crc kubenswrapper[4687]: E0312 16:31:59.734731 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.205968 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555552-xcqt2"] Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.207697 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555552-xcqt2" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.209639 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.215199 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.215226 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.218574 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555552-xcqt2"] Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.355498 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vfnl\" (UniqueName: \"kubernetes.io/projected/dd4248d8-d237-4160-9d5a-8f7d232e252b-kube-api-access-4vfnl\") pod \"auto-csr-approver-29555552-xcqt2\" (UID: \"dd4248d8-d237-4160-9d5a-8f7d232e252b\") " pod="openshift-infra/auto-csr-approver-29555552-xcqt2" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.458680 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfnl\" (UniqueName: \"kubernetes.io/projected/dd4248d8-d237-4160-9d5a-8f7d232e252b-kube-api-access-4vfnl\") pod \"auto-csr-approver-29555552-xcqt2\" (UID: \"dd4248d8-d237-4160-9d5a-8f7d232e252b\") " pod="openshift-infra/auto-csr-approver-29555552-xcqt2" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.477096 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfnl\" (UniqueName: \"kubernetes.io/projected/dd4248d8-d237-4160-9d5a-8f7d232e252b-kube-api-access-4vfnl\") pod \"auto-csr-approver-29555552-xcqt2\" (UID: \"dd4248d8-d237-4160-9d5a-8f7d232e252b\") " pod="openshift-infra/auto-csr-approver-29555552-xcqt2" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.532924 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555552-xcqt2" Mar 12 16:32:00 crc kubenswrapper[4687]: I0312 16:32:00.990906 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555552-xcqt2"] Mar 12 16:32:01 crc kubenswrapper[4687]: I0312 16:32:01.088746 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555552-xcqt2" event={"ID":"dd4248d8-d237-4160-9d5a-8f7d232e252b","Type":"ContainerStarted","Data":"44c750a20e64b634ce581d3a009d7c56f710af6effc99879c0d5135305ce189d"} Mar 12 16:32:03 crc kubenswrapper[4687]: I0312 16:32:03.112560 4687 generic.go:334] "Generic (PLEG): container finished" podID="dd4248d8-d237-4160-9d5a-8f7d232e252b" containerID="378e3190178324307210a5cdd0ebac2f75371511d80cff86703c532d94cb282f" exitCode=0 Mar 12 16:32:03 crc kubenswrapper[4687]: I0312 16:32:03.113628 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555552-xcqt2" event={"ID":"dd4248d8-d237-4160-9d5a-8f7d232e252b","Type":"ContainerDied","Data":"378e3190178324307210a5cdd0ebac2f75371511d80cff86703c532d94cb282f"} Mar 12 16:32:04 crc kubenswrapper[4687]: I0312 16:32:04.125335 4687 generic.go:334] "Generic (PLEG): container finished" podID="f00fce39-19aa-4b10-9e76-04c114232731" containerID="e640c307ecab1bdcfe9a41282562412a756caf9a1132c4712f640da93e7750df" exitCode=0 Mar 12 16:32:04 crc kubenswrapper[4687]: I0312 16:32:04.125412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f00fce39-19aa-4b10-9e76-04c114232731","Type":"ContainerDied","Data":"e640c307ecab1bdcfe9a41282562412a756caf9a1132c4712f640da93e7750df"} Mar 12 16:32:04 crc kubenswrapper[4687]: I0312 16:32:04.630203 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555552-xcqt2" Mar 12 16:32:04 crc kubenswrapper[4687]: I0312 16:32:04.757668 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vfnl\" (UniqueName: \"kubernetes.io/projected/dd4248d8-d237-4160-9d5a-8f7d232e252b-kube-api-access-4vfnl\") pod \"dd4248d8-d237-4160-9d5a-8f7d232e252b\" (UID: \"dd4248d8-d237-4160-9d5a-8f7d232e252b\") " Mar 12 16:32:04 crc kubenswrapper[4687]: I0312 16:32:04.761811 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4248d8-d237-4160-9d5a-8f7d232e252b-kube-api-access-4vfnl" (OuterVolumeSpecName: "kube-api-access-4vfnl") pod "dd4248d8-d237-4160-9d5a-8f7d232e252b" (UID: "dd4248d8-d237-4160-9d5a-8f7d232e252b"). InnerVolumeSpecName "kube-api-access-4vfnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:32:04 crc kubenswrapper[4687]: I0312 16:32:04.867779 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vfnl\" (UniqueName: \"kubernetes.io/projected/dd4248d8-d237-4160-9d5a-8f7d232e252b-kube-api-access-4vfnl\") on node \"crc\" DevicePath \"\"" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.122762 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6wx"] Mar 12 16:32:05 crc kubenswrapper[4687]: E0312 16:32:05.123343 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4248d8-d237-4160-9d5a-8f7d232e252b" containerName="oc" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.123389 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4248d8-d237-4160-9d5a-8f7d232e252b" containerName="oc" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.123701 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4248d8-d237-4160-9d5a-8f7d232e252b" containerName="oc" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.125851 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.135105 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6wx"] Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.136269 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555552-xcqt2" event={"ID":"dd4248d8-d237-4160-9d5a-8f7d232e252b","Type":"ContainerDied","Data":"44c750a20e64b634ce581d3a009d7c56f710af6effc99879c0d5135305ce189d"} Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.136293 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555552-xcqt2" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.136315 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c750a20e64b634ce581d3a009d7c56f710af6effc99879c0d5135305ce189d" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.138803 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f00fce39-19aa-4b10-9e76-04c114232731","Type":"ContainerStarted","Data":"a1fe043c636ac8637d81ea88719d4dfba598ffe0b4730cc2f2ac6fc205787fd8"} Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.139778 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.172419 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.172403734 podStartE2EDuration="37.172403734s" podCreationTimestamp="2026-03-12 16:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:32:05.17154602 +0000 UTC m=+1774.135508364" watchObservedRunningTime="2026-03-12 16:32:05.172403734 +0000 UTC m=+1774.136366078" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.177922 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-utilities\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.178400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhgs\" (UniqueName: \"kubernetes.io/projected/09d5574c-eb0a-4471-81eb-ba14039979c3-kube-api-access-rzhgs\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.178681 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-catalog-content\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.280553 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhgs\" (UniqueName: \"kubernetes.io/projected/09d5574c-eb0a-4471-81eb-ba14039979c3-kube-api-access-rzhgs\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.280655 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-catalog-content\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.280766 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-utilities\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.281261 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-catalog-content\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.283744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-utilities\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.298866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhgs\" (UniqueName: \"kubernetes.io/projected/09d5574c-eb0a-4471-81eb-ba14039979c3-kube-api-access-rzhgs\") pod \"redhat-marketplace-tr6wx\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.545880 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.754990 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555546-l7bqh"] Mar 12 16:32:05 crc kubenswrapper[4687]: I0312 16:32:05.765826 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555546-l7bqh"] Mar 12 16:32:06 crc kubenswrapper[4687]: I0312 16:32:06.056843 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6wx"] Mar 12 16:32:06 crc kubenswrapper[4687]: W0312 16:32:06.058585 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09d5574c_eb0a_4471_81eb_ba14039979c3.slice/crio-75a048d820e4199bba885395a61fb1b1bbc043b1dd0113531983c327e0b11174 WatchSource:0}: Error finding container 75a048d820e4199bba885395a61fb1b1bbc043b1dd0113531983c327e0b11174: Status 404 returned error can't find the container with id 75a048d820e4199bba885395a61fb1b1bbc043b1dd0113531983c327e0b11174 Mar 12 16:32:06 crc kubenswrapper[4687]: I0312 16:32:06.163887 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6wx" event={"ID":"09d5574c-eb0a-4471-81eb-ba14039979c3","Type":"ContainerStarted","Data":"75a048d820e4199bba885395a61fb1b1bbc043b1dd0113531983c327e0b11174"} Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.174137 4687 generic.go:334] "Generic (PLEG): container finished" podID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerID="23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e" exitCode=0 Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.174241 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6wx" event={"ID":"09d5574c-eb0a-4471-81eb-ba14039979c3","Type":"ContainerDied","Data":"23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e"} Mar 12 16:32:07 crc 
kubenswrapper[4687]: I0312 16:32:07.522256 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9gdrl"] Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.527209 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.538173 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-utilities\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.538328 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-catalog-content\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.538512 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9sd\" (UniqueName: \"kubernetes.io/projected/3f5d94a3-5d8f-4466-abc6-837ea78cb762-kube-api-access-cl9sd\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.552520 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gdrl"] Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.661049 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9sd\" (UniqueName: \"kubernetes.io/projected/3f5d94a3-5d8f-4466-abc6-837ea78cb762-kube-api-access-cl9sd\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.661741 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-utilities\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.663576 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-utilities\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.663788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-catalog-content\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.664120 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-catalog-content\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.694093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9sd\" (UniqueName: \"kubernetes.io/projected/3f5d94a3-5d8f-4466-abc6-837ea78cb762-kube-api-access-cl9sd\") pod \"certified-operators-9gdrl\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.751265 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17278eb4-0763-4788-b98d-d58198b0add7" path="/var/lib/kubelet/pods/17278eb4-0763-4788-b98d-d58198b0add7/volumes" Mar 12 16:32:07 crc kubenswrapper[4687]: I0312 16:32:07.851463 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:08 crc kubenswrapper[4687]: I0312 16:32:08.187116 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6wx" event={"ID":"09d5574c-eb0a-4471-81eb-ba14039979c3","Type":"ContainerStarted","Data":"7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8"} Mar 12 16:32:08 crc kubenswrapper[4687]: W0312 16:32:08.424069 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f5d94a3_5d8f_4466_abc6_837ea78cb762.slice/crio-103f95ae2208ae234dc93552618a8666e437b990810d398d313f86f797be0834 WatchSource:0}: Error finding container 103f95ae2208ae234dc93552618a8666e437b990810d398d313f86f797be0834: Status 404 returned error can't find the container with id 103f95ae2208ae234dc93552618a8666e437b990810d398d313f86f797be0834 Mar 12 16:32:08 crc kubenswrapper[4687]: I0312 16:32:08.449274 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gdrl"] Mar 12 16:32:09 crc kubenswrapper[4687]: I0312 16:32:09.204250 4687 generic.go:334] "Generic (PLEG): container finished" podID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerID="ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d" exitCode=0 Mar 12 16:32:09 crc kubenswrapper[4687]: I0312 16:32:09.204484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdrl" event={"ID":"3f5d94a3-5d8f-4466-abc6-837ea78cb762","Type":"ContainerDied","Data":"ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d"} Mar 12 16:32:09 crc kubenswrapper[4687]: I0312 16:32:09.204665 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdrl" event={"ID":"3f5d94a3-5d8f-4466-abc6-837ea78cb762","Type":"ContainerStarted","Data":"103f95ae2208ae234dc93552618a8666e437b990810d398d313f86f797be0834"} Mar 12 16:32:10 crc kubenswrapper[4687]: I0312 16:32:10.218827 4687 generic.go:334] "Generic (PLEG): container finished" podID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerID="7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8" exitCode=0 Mar 12 16:32:10 crc kubenswrapper[4687]: I0312 16:32:10.219289 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6wx" 
event={"ID":"09d5574c-eb0a-4471-81eb-ba14039979c3","Type":"ContainerDied","Data":"7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8"} Mar 12 16:32:10 crc kubenswrapper[4687]: I0312 16:32:10.224025 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdrl" event={"ID":"3f5d94a3-5d8f-4466-abc6-837ea78cb762","Type":"ContainerStarted","Data":"65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b"} Mar 12 16:32:11 crc kubenswrapper[4687]: I0312 16:32:11.243872 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6wx" event={"ID":"09d5574c-eb0a-4471-81eb-ba14039979c3","Type":"ContainerStarted","Data":"260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c"} Mar 12 16:32:11 crc kubenswrapper[4687]: I0312 16:32:11.276413 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tr6wx" podStartSLOduration=2.792534433 podStartE2EDuration="6.276391197s" podCreationTimestamp="2026-03-12 16:32:05 +0000 UTC" firstStartedPulling="2026-03-12 16:32:07.178019705 +0000 UTC m=+1776.141982049" lastFinishedPulling="2026-03-12 16:32:10.661876469 +0000 UTC m=+1779.625838813" observedRunningTime="2026-03-12 16:32:11.264610634 +0000 UTC m=+1780.228572988" watchObservedRunningTime="2026-03-12 16:32:11.276391197 +0000 UTC m=+1780.240353551" Mar 12 16:32:12 crc kubenswrapper[4687]: I0312 16:32:12.733548 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:32:12 crc kubenswrapper[4687]: E0312 16:32:12.734457 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:32:13 crc kubenswrapper[4687]: I0312 16:32:13.269185 4687 generic.go:334] "Generic (PLEG): container finished" podID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerID="65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b" exitCode=0 Mar 12 16:32:13 crc kubenswrapper[4687]: I0312 16:32:13.269240 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdrl" event={"ID":"3f5d94a3-5d8f-4466-abc6-837ea78cb762","Type":"ContainerDied","Data":"65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b"} Mar 12 16:32:14 crc kubenswrapper[4687]: I0312 16:32:14.282824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdrl" event={"ID":"3f5d94a3-5d8f-4466-abc6-837ea78cb762","Type":"ContainerStarted","Data":"93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0"} Mar 12 16:32:14 crc kubenswrapper[4687]: I0312 16:32:14.310233 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9gdrl" podStartSLOduration=2.682269859 podStartE2EDuration="7.310213041s" podCreationTimestamp="2026-03-12 16:32:07 +0000 UTC" firstStartedPulling="2026-03-12 16:32:09.207258004 +0000 UTC m=+1778.171220358" lastFinishedPulling="2026-03-12 16:32:13.835201196 +0000 UTC m=+1782.799163540" observedRunningTime="2026-03-12 16:32:14.298006716 +0000 UTC m=+1783.261969070" 
watchObservedRunningTime="2026-03-12 16:32:14.310213041 +0000 UTC m=+1783.274175385" Mar 12 16:32:15 crc kubenswrapper[4687]: I0312 16:32:15.546805 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:15 crc kubenswrapper[4687]: I0312 16:32:15.547209 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:16 crc kubenswrapper[4687]: I0312 16:32:16.653968 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tr6wx" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="registry-server" probeResult="failure" output=< Mar 12 16:32:16 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:32:16 crc kubenswrapper[4687]: > Mar 12 16:32:17 crc kubenswrapper[4687]: I0312 16:32:17.852398 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:17 crc kubenswrapper[4687]: I0312 16:32:17.852846 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:18 crc kubenswrapper[4687]: I0312 16:32:18.392875 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 16:32:18 crc kubenswrapper[4687]: I0312 16:32:18.908795 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9gdrl" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="registry-server" probeResult="failure" output=< Mar 12 16:32:18 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:32:18 crc kubenswrapper[4687]: > Mar 12 16:32:25 crc kubenswrapper[4687]: I0312 16:32:25.640975 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:25 crc kubenswrapper[4687]: I0312 16:32:25.712267 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:25 crc kubenswrapper[4687]: I0312 16:32:25.733107 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:32:25 crc kubenswrapper[4687]: E0312 16:32:25.733576 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:32:26 crc kubenswrapper[4687]: I0312 16:32:26.342692 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6wx"] Mar 12 16:32:27 crc kubenswrapper[4687]: I0312 16:32:27.442537 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tr6wx" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="registry-server" containerID="cri-o://260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c" gracePeriod=2 Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.013830 4687 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.174231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-utilities\") pod \"09d5574c-eb0a-4471-81eb-ba14039979c3\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.174355 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-catalog-content\") pod \"09d5574c-eb0a-4471-81eb-ba14039979c3\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.174512 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzhgs\" (UniqueName: \"kubernetes.io/projected/09d5574c-eb0a-4471-81eb-ba14039979c3-kube-api-access-rzhgs\") pod \"09d5574c-eb0a-4471-81eb-ba14039979c3\" (UID: \"09d5574c-eb0a-4471-81eb-ba14039979c3\") " Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.174922 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-utilities" (OuterVolumeSpecName: "utilities") pod "09d5574c-eb0a-4471-81eb-ba14039979c3" (UID: "09d5574c-eb0a-4471-81eb-ba14039979c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.175329 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.192288 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d5574c-eb0a-4471-81eb-ba14039979c3-kube-api-access-rzhgs" (OuterVolumeSpecName: "kube-api-access-rzhgs") pod "09d5574c-eb0a-4471-81eb-ba14039979c3" (UID: "09d5574c-eb0a-4471-81eb-ba14039979c3"). InnerVolumeSpecName "kube-api-access-rzhgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.199930 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09d5574c-eb0a-4471-81eb-ba14039979c3" (UID: "09d5574c-eb0a-4471-81eb-ba14039979c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.277338 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09d5574c-eb0a-4471-81eb-ba14039979c3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.277368 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzhgs\" (UniqueName: \"kubernetes.io/projected/09d5574c-eb0a-4471-81eb-ba14039979c3-kube-api-access-rzhgs\") on node \"crc\" DevicePath \"\"" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.458558 4687 generic.go:334] "Generic (PLEG): container finished" podID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerID="260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c" exitCode=0 Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.458634 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tr6wx" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.458632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6wx" event={"ID":"09d5574c-eb0a-4471-81eb-ba14039979c3","Type":"ContainerDied","Data":"260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c"} Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.459634 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tr6wx" event={"ID":"09d5574c-eb0a-4471-81eb-ba14039979c3","Type":"ContainerDied","Data":"75a048d820e4199bba885395a61fb1b1bbc043b1dd0113531983c327e0b11174"} Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.459663 4687 scope.go:117] "RemoveContainer" containerID="260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.496997 4687 scope.go:117] "RemoveContainer" containerID="7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.501355 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6wx"] Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.517948 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tr6wx"] Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.548458 4687 scope.go:117] "RemoveContainer" containerID="23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.593158 4687 scope.go:117] "RemoveContainer" containerID="260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c" Mar 12 16:32:28 crc kubenswrapper[4687]: E0312 16:32:28.593831 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c\": container with ID starting with 260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c not found: ID does not exist" containerID="260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.593883 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c"} err="failed to get container status 
\"260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c\": rpc error: code = NotFound desc = could not find container \"260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c\": container with ID starting with 260bce499fbb7dcc48e7ca87d5ff0a7b1397b0cdb7f6733971540b663f24195c not found: ID does not exist" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.593903 4687 scope.go:117] "RemoveContainer" containerID="7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8" Mar 12 16:32:28 crc kubenswrapper[4687]: E0312 16:32:28.594213 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8\": container with ID starting with 7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8 not found: ID does not exist" containerID="7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.594281 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8"} err="failed to get container status \"7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8\": rpc error: code = NotFound desc = could not find container \"7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8\": container with ID starting with 7e5a2f3e06c3504ce2e8192f7a06e54d842dc0e5750e5e0a085e16ffc006ffa8 not found: ID does not exist" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.594328 4687 scope.go:117] "RemoveContainer" containerID="23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e" Mar 12 16:32:28 crc kubenswrapper[4687]: E0312 16:32:28.594851 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e\": container with ID starting with 23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e not found: ID does not exist" containerID="23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.594885 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e"} err="failed to get container status \"23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e\": rpc error: code = NotFound desc = could not find container \"23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e\": container with ID starting with 23e51f3f92e75627fce8c3084d3cf5df711ed96f2bafe1f1afa9dc3836cdc14e not found: ID does not exist" Mar 12 16:32:28 crc kubenswrapper[4687]: I0312 16:32:28.907042 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9gdrl" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="registry-server" probeResult="failure" output=< Mar 12 16:32:28 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:32:28 crc kubenswrapper[4687]: > Mar 12 16:32:29 crc kubenswrapper[4687]: I0312 16:32:29.746824 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" path="/var/lib/kubelet/pods/09d5574c-eb0a-4471-81eb-ba14039979c3/volumes" Mar 12 16:32:29 crc kubenswrapper[4687]: I0312 16:32:29.795344 4687 scope.go:117] 
"RemoveContainer" containerID="1ec6a3b82a323a87dfab248865326c5a035e5f1581ada9e4164e13cf4c824a40" Mar 12 16:32:29 crc kubenswrapper[4687]: I0312 16:32:29.819763 4687 scope.go:117] "RemoveContainer" containerID="88dd2cc69ed4301625f753462469f2088b642b439fbf6b91b36f78a5945de6bd" Mar 12 16:32:29 crc kubenswrapper[4687]: I0312 16:32:29.858663 4687 scope.go:117] "RemoveContainer" containerID="9dbf1b3ccb798c3048e5095e91afcac8fbf2c71140dad299b009d951a2c43129" Mar 12 16:32:29 crc kubenswrapper[4687]: I0312 16:32:29.889277 4687 scope.go:117] "RemoveContainer" containerID="1f59f71b44a246f3734454130537a05c5a68c47777891361ecae2e70f494ab0a" Mar 12 16:32:29 crc kubenswrapper[4687]: I0312 16:32:29.953763 4687 scope.go:117] "RemoveContainer" containerID="f31810308c6128e83c2297f6b5fd09f6660da7129cbb5699a5bcdd5393731a2e" Mar 12 16:32:30 crc kubenswrapper[4687]: I0312 16:32:30.008051 4687 scope.go:117] "RemoveContainer" containerID="eddfb1a02ff7f1dba246f0a87f5928991af2a3aadb1f8d130e7bd974d02ccdf1" Mar 12 16:32:37 crc kubenswrapper[4687]: I0312 16:32:37.733710 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:32:37 crc kubenswrapper[4687]: E0312 16:32:37.734894 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:32:37 crc kubenswrapper[4687]: I0312 16:32:37.920713 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:37 crc kubenswrapper[4687]: I0312 16:32:37.978999 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:38 crc kubenswrapper[4687]: I0312 16:32:38.729513 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9gdrl"] Mar 12 16:32:39 crc kubenswrapper[4687]: I0312 16:32:39.579542 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9gdrl" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="registry-server" containerID="cri-o://93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0" gracePeriod=2 Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.122643 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.210508 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-utilities\") pod \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.210732 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-catalog-content\") pod \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.210761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl9sd\" (UniqueName: \"kubernetes.io/projected/3f5d94a3-5d8f-4466-abc6-837ea78cb762-kube-api-access-cl9sd\") pod \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\" (UID: \"3f5d94a3-5d8f-4466-abc6-837ea78cb762\") " Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.211327 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-utilities" (OuterVolumeSpecName: "utilities") pod "3f5d94a3-5d8f-4466-abc6-837ea78cb762" (UID: "3f5d94a3-5d8f-4466-abc6-837ea78cb762"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.211827 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.220418 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5d94a3-5d8f-4466-abc6-837ea78cb762-kube-api-access-cl9sd" (OuterVolumeSpecName: "kube-api-access-cl9sd") pod "3f5d94a3-5d8f-4466-abc6-837ea78cb762" (UID: "3f5d94a3-5d8f-4466-abc6-837ea78cb762"). InnerVolumeSpecName "kube-api-access-cl9sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.262215 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f5d94a3-5d8f-4466-abc6-837ea78cb762" (UID: "3f5d94a3-5d8f-4466-abc6-837ea78cb762"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.313669 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5d94a3-5d8f-4466-abc6-837ea78cb762-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.313708 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl9sd\" (UniqueName: \"kubernetes.io/projected/3f5d94a3-5d8f-4466-abc6-837ea78cb762-kube-api-access-cl9sd\") on node \"crc\" DevicePath \"\"" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.593298 4687 generic.go:334] "Generic (PLEG): container finished" podID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerID="93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0" exitCode=0 Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.593337 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdrl" event={"ID":"3f5d94a3-5d8f-4466-abc6-837ea78cb762","Type":"ContainerDied","Data":"93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0"} Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.593415 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdrl" event={"ID":"3f5d94a3-5d8f-4466-abc6-837ea78cb762","Type":"ContainerDied","Data":"103f95ae2208ae234dc93552618a8666e437b990810d398d313f86f797be0834"} Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.593443 4687 scope.go:117] "RemoveContainer" containerID="93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.593350 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9gdrl" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.624037 4687 scope.go:117] "RemoveContainer" containerID="65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.631350 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9gdrl"] Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.645697 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9gdrl"] Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.657709 4687 scope.go:117] "RemoveContainer" containerID="ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.715954 4687 scope.go:117] "RemoveContainer" containerID="93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0" Mar 12 16:32:40 crc kubenswrapper[4687]: E0312 16:32:40.716546 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0\": container with ID starting with 93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0 not found: ID does not exist" containerID="93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.716606 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0"} err="failed to get container status \"93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0\": rpc error: code = NotFound desc = could not find container \"93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0\": container with ID starting with 93758eace3cd55e0c0703c96a429796afbfc701c43afcdba213e42777872cef0 not found: ID does not exist" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.716644 4687 scope.go:117] "RemoveContainer" containerID="65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b" Mar 12 16:32:40 crc kubenswrapper[4687]: E0312 16:32:40.717051 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b\": container with ID starting with 65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b not found: ID does not exist" containerID="65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.717084 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b"} err="failed to get container status \"65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b\": rpc error: code = NotFound desc = could not find container \"65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b\": container with ID starting with 65d8d63f58c7e4d67a206d82294a323c911940d9e0c80fa1549c0db06eebfa3b not found: ID does not exist" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.717104 4687 scope.go:117] "RemoveContainer" containerID="ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d" Mar 12 16:32:40 crc kubenswrapper[4687]: E0312 16:32:40.717301 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d\": container with ID starting with ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d not found: ID does not exist" containerID="ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d" Mar 12 16:32:40 crc kubenswrapper[4687]: I0312 16:32:40.717319 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d"} err="failed to get container status \"ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d\": rpc error: code = NotFound desc = could not find container \"ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d\": container with ID starting with ec62d50860a41a8481f56f598781149013d8747644920181d9fd81c6d889374d not found: ID does not exist" Mar 12 16:32:41 crc kubenswrapper[4687]: I0312 16:32:41.750194 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" path="/var/lib/kubelet/pods/3f5d94a3-5d8f-4466-abc6-837ea78cb762/volumes" Mar 12 16:32:49 crc kubenswrapper[4687]: I0312 16:32:49.734150 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:32:49 crc kubenswrapper[4687]: E0312 16:32:49.735960 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:33:00 crc kubenswrapper[4687]: I0312 16:33:00.734074 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:33:00 crc kubenswrapper[4687]: E0312 16:33:00.734902 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:33:00 crc kubenswrapper[4687]: I0312 16:33:00.830104 4687 generic.go:334] "Generic (PLEG): container finished" podID="e5e2e312-1292-454f-8b27-6a6a43fe4a1e" containerID="b816ddb30277d6afe8038e7164fb2a6de462da8a3892ac751cb9765087aaa5f7" exitCode=0 Mar 12 16:33:00 crc kubenswrapper[4687]: I0312 16:33:00.830148 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" event={"ID":"e5e2e312-1292-454f-8b27-6a6a43fe4a1e","Type":"ContainerDied","Data":"b816ddb30277d6afe8038e7164fb2a6de462da8a3892ac751cb9765087aaa5f7"} Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.296863 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.376478 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx2dx\" (UniqueName: \"kubernetes.io/projected/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-kube-api-access-xx2dx\") pod \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.376659 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-bootstrap-combined-ca-bundle\") pod \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.376695 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-inventory\") pod \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.376786 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-ssh-key-openstack-edpm-ipam\") pod \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\" (UID: \"e5e2e312-1292-454f-8b27-6a6a43fe4a1e\") " Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.382890 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e5e2e312-1292-454f-8b27-6a6a43fe4a1e" (UID: "e5e2e312-1292-454f-8b27-6a6a43fe4a1e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.383006 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-kube-api-access-xx2dx" (OuterVolumeSpecName: "kube-api-access-xx2dx") pod "e5e2e312-1292-454f-8b27-6a6a43fe4a1e" (UID: "e5e2e312-1292-454f-8b27-6a6a43fe4a1e"). InnerVolumeSpecName "kube-api-access-xx2dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.407592 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e5e2e312-1292-454f-8b27-6a6a43fe4a1e" (UID: "e5e2e312-1292-454f-8b27-6a6a43fe4a1e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.410096 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-inventory" (OuterVolumeSpecName: "inventory") pod "e5e2e312-1292-454f-8b27-6a6a43fe4a1e" (UID: "e5e2e312-1292-454f-8b27-6a6a43fe4a1e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.480507 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx2dx\" (UniqueName: \"kubernetes.io/projected/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-kube-api-access-xx2dx\") on node \"crc\" DevicePath \"\"" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.480559 4687 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.480571 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.480580 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5e2e312-1292-454f-8b27-6a6a43fe4a1e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.862975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" event={"ID":"e5e2e312-1292-454f-8b27-6a6a43fe4a1e","Type":"ContainerDied","Data":"606ecb5991955481d39030f595bc5fcb1359bb37e0a615d270512f827611ba5d"} Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.863018 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="606ecb5991955481d39030f595bc5fcb1359bb37e0a615d270512f827611ba5d" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.863085 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.959494 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4"] Mar 12 16:33:02 crc kubenswrapper[4687]: E0312 16:33:02.960047 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="extract-content" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960067 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="extract-content" Mar 12 16:33:02 crc kubenswrapper[4687]: E0312 16:33:02.960090 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="registry-server" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960098 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="registry-server" Mar 12 16:33:02 crc kubenswrapper[4687]: E0312 16:33:02.960118 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="extract-content" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960126 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="extract-content" Mar 12 16:33:02 crc kubenswrapper[4687]: E0312 16:33:02.960142 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="registry-server" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960152 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="registry-server" Mar 12 16:33:02 crc kubenswrapper[4687]: E0312 16:33:02.960172 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="extract-utilities" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960179 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="extract-utilities" Mar 12 16:33:02 crc kubenswrapper[4687]: E0312 16:33:02.960189 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e2e312-1292-454f-8b27-6a6a43fe4a1e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960246 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e2e312-1292-454f-8b27-6a6a43fe4a1e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 16:33:02 crc kubenswrapper[4687]: E0312 16:33:02.960260 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="extract-utilities" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960267 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="extract-utilities" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960702 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5d94a3-5d8f-4466-abc6-837ea78cb762" containerName="registry-server" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960756 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e2e312-1292-454f-8b27-6a6a43fe4a1e" 
containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.960787 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d5574c-eb0a-4471-81eb-ba14039979c3" containerName="registry-server" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.973465 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.973979 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4"] Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.975560 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.975905 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.976230 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:33:02 crc kubenswrapper[4687]: I0312 16:33:02.976676 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.020833 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrz6\" (UniqueName: \"kubernetes.io/projected/c24fada5-9a93-4300-85b9-19da711555dc-kube-api-access-tsrz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.020950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.021021 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.129818 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrz6\" (UniqueName: \"kubernetes.io/projected/c24fada5-9a93-4300-85b9-19da711555dc-kube-api-access-tsrz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.130004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.130097 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.143868 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.145732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.145942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrz6\" (UniqueName: \"kubernetes.io/projected/c24fada5-9a93-4300-85b9-19da711555dc-kube-api-access-tsrz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.300735 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:33:03 crc kubenswrapper[4687]: I0312 16:33:03.908685 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4"] Mar 12 16:33:04 crc kubenswrapper[4687]: I0312 16:33:04.914457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" event={"ID":"c24fada5-9a93-4300-85b9-19da711555dc","Type":"ContainerStarted","Data":"e9cf9b6ace6ff43da7c59b60ba5040ad5d267673d02927dab991f9a4133811a7"} Mar 12 16:33:05 crc kubenswrapper[4687]: I0312 16:33:05.927542 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" event={"ID":"c24fada5-9a93-4300-85b9-19da711555dc","Type":"ContainerStarted","Data":"01d78d164d03fa5e32ae322036552d5ede775cb5da92dcb49162961d1ba4282b"} Mar 12 16:33:05 crc kubenswrapper[4687]: I0312 16:33:05.951751 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" podStartSLOduration=3.081861248 podStartE2EDuration="3.951730166s" podCreationTimestamp="2026-03-12 16:33:02 +0000 UTC" firstStartedPulling="2026-03-12 16:33:03.907727738 +0000 UTC m=+1832.871690082" lastFinishedPulling="2026-03-12 16:33:04.777596646 +0000 UTC m=+1833.741559000" observedRunningTime="2026-03-12 16:33:05.943213383 +0000 UTC m=+1834.907175727" watchObservedRunningTime="2026-03-12 16:33:05.951730166 +0000 UTC m=+1834.915692510" Mar 12 16:33:13 crc kubenswrapper[4687]: I0312 16:33:13.733568 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:33:13 crc kubenswrapper[4687]: E0312 16:33:13.734524 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:33:28 crc kubenswrapper[4687]: I0312 16:33:28.734313 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:33:29 crc kubenswrapper[4687]: I0312 16:33:29.191638 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"d9dc6392cb7dbf829f54eade1ebdbf1980615c87a7f18a1a2fbd8d0c0a711640"} Mar 12 16:33:30 crc kubenswrapper[4687]: I0312 16:33:30.150776 4687 scope.go:117] "RemoveContainer" containerID="273b0090705c7d23b745fc634676d50822824a34397c5540a0467cace64653ef" Mar 12 16:33:30 crc kubenswrapper[4687]: I0312 16:33:30.203126 4687 scope.go:117] "RemoveContainer" containerID="e827a8608507eb8102c4284e623bfd128bdfd0c991f6b771d830023b00310b0f" Mar 12 16:33:30 crc kubenswrapper[4687]: I0312 16:33:30.249581 4687 scope.go:117] "RemoveContainer" containerID="639f8ed1837a3b804abbcf73651b34d8b826a2e5d6b1c680d1e8ae7c7691a712" Mar 12 16:33:48 crc kubenswrapper[4687]: I0312 16:33:48.056966 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5dc0-account-create-update-mwgwc"] Mar 12 16:33:48 crc 
kubenswrapper[4687]: I0312 16:33:48.080601 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5dc0-account-create-update-mwgwc"] Mar 12 16:33:48 crc kubenswrapper[4687]: I0312 16:33:48.093887 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mx5f2"] Mar 12 16:33:48 crc kubenswrapper[4687]: I0312 16:33:48.106287 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3eed-account-create-update-9lc6t"] Mar 12 16:33:48 crc kubenswrapper[4687]: I0312 16:33:48.118980 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mx5f2"] Mar 12 16:33:48 crc kubenswrapper[4687]: I0312 16:33:48.129968 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3eed-account-create-update-9lc6t"] Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.032139 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-d8vg2"] Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.048545 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-vv8qj"] Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.091026 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d5f9-account-create-update-4s62g"] Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.106959 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-vv8qj"] Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.118455 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-d8vg2"] Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.129158 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d5f9-account-create-update-4s62g"] Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.748740 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39486168-4d75-4418-a2d8-576c03bb7743" path="/var/lib/kubelet/pods/39486168-4d75-4418-a2d8-576c03bb7743/volumes" Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.753385 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a6e134-b880-4c0b-af7a-f14d5ecaca30" path="/var/lib/kubelet/pods/59a6e134-b880-4c0b-af7a-f14d5ecaca30/volumes" Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.754543 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c132350-8fd3-41bb-90a9-cee0bd08112f" path="/var/lib/kubelet/pods/7c132350-8fd3-41bb-90a9-cee0bd08112f/volumes" Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.756593 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b" path="/var/lib/kubelet/pods/84221c3f-e3cb-4784-b4ed-fd5c00e9fc9b/volumes" Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.760185 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1799451-d37d-4851-93e4-cb394f3d1739" path="/var/lib/kubelet/pods/a1799451-d37d-4851-93e4-cb394f3d1739/volumes" Mar 12 16:33:49 crc kubenswrapper[4687]: I0312 16:33:49.762940 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7869f58-eb4c-49a5-addf-c157afcb109b" path="/var/lib/kubelet/pods/d7869f58-eb4c-49a5-addf-c157afcb109b/volumes" Mar 12 16:33:52 crc kubenswrapper[4687]: I0312 16:33:52.046410 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-9858-account-create-update-95r55"] Mar 12 16:33:52 crc kubenswrapper[4687]: I0312 
16:33:52.056244 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7zgtv"] Mar 12 16:33:52 crc kubenswrapper[4687]: I0312 16:33:52.070443 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-9858-account-create-update-95r55"] Mar 12 16:33:52 crc kubenswrapper[4687]: I0312 16:33:52.083602 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7zgtv"] Mar 12 16:33:53 crc kubenswrapper[4687]: I0312 16:33:53.745010 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d038a44f-695f-49b3-8878-f7bee07fd444" path="/var/lib/kubelet/pods/d038a44f-695f-49b3-8878-f7bee07fd444/volumes" Mar 12 16:33:53 crc kubenswrapper[4687]: I0312 16:33:53.747088 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11478c5-fa5a-4676-a297-ca9b2db901e6" path="/var/lib/kubelet/pods/e11478c5-fa5a-4676-a297-ca9b2db901e6/volumes" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.183297 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555554-q5mmh"] Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.186500 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.190200 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.190554 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.190701 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.202017 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555554-q5mmh"] Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.344740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6ljk\" (UniqueName: \"kubernetes.io/projected/78ee8c3f-eebe-42fe-a34e-8e434b1b1444-kube-api-access-s6ljk\") pod \"auto-csr-approver-29555554-q5mmh\" (UID: \"78ee8c3f-eebe-42fe-a34e-8e434b1b1444\") " pod="openshift-infra/auto-csr-approver-29555554-q5mmh" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.448290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6ljk\" (UniqueName: \"kubernetes.io/projected/78ee8c3f-eebe-42fe-a34e-8e434b1b1444-kube-api-access-s6ljk\") pod \"auto-csr-approver-29555554-q5mmh\" (UID: \"78ee8c3f-eebe-42fe-a34e-8e434b1b1444\") " pod="openshift-infra/auto-csr-approver-29555554-q5mmh" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.469440 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6ljk\" (UniqueName: \"kubernetes.io/projected/78ee8c3f-eebe-42fe-a34e-8e434b1b1444-kube-api-access-s6ljk\") pod \"auto-csr-approver-29555554-q5mmh\" (UID: \"78ee8c3f-eebe-42fe-a34e-8e434b1b1444\") " pod="openshift-infra/auto-csr-approver-29555554-q5mmh" Mar 12 16:34:00 crc kubenswrapper[4687]: I0312 16:34:00.506624 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" Mar 12 16:34:01 crc kubenswrapper[4687]: I0312 16:34:01.005319 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555554-q5mmh"] Mar 12 16:34:01 crc kubenswrapper[4687]: I0312 16:34:01.589105 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" event={"ID":"78ee8c3f-eebe-42fe-a34e-8e434b1b1444","Type":"ContainerStarted","Data":"40a3940099345c9173ee20911fba31b464136c91ddd219bf4b5852449ebc456e"} Mar 12 16:34:02 crc kubenswrapper[4687]: I0312 16:34:02.044251 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx"] Mar 12 16:34:02 crc kubenswrapper[4687]: I0312 16:34:02.054841 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jkbbx"] Mar 12 16:34:03 crc kubenswrapper[4687]: I0312 16:34:03.957470 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62" path="/var/lib/kubelet/pods/4d87d3a2-3fe6-4d8b-9fc5-08bfb4ee5c62/volumes" Mar 12 16:34:03 crc kubenswrapper[4687]: I0312 16:34:03.970961 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" event={"ID":"78ee8c3f-eebe-42fe-a34e-8e434b1b1444","Type":"ContainerStarted","Data":"74a0ace896b60d0c38999af8846eaf277b205593eea3f218ddc561c4703c5236"} Mar 12 16:34:04 crc kubenswrapper[4687]: I0312 16:34:04.002387 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" podStartSLOduration=2.44420411 podStartE2EDuration="4.002350115s" podCreationTimestamp="2026-03-12 16:34:00 +0000 UTC" firstStartedPulling="2026-03-12 16:34:01.015124281 +0000 UTC m=+1889.979086635" lastFinishedPulling="2026-03-12 16:34:02.573270296 +0000 UTC m=+1891.537232640" observedRunningTime="2026-03-12 16:34:03.982502112 +0000 UTC m=+1892.946464466" watchObservedRunningTime="2026-03-12 16:34:04.002350115 +0000 UTC m=+1892.966312459" Mar 12 16:34:04 crc kubenswrapper[4687]: I0312 16:34:04.982426 4687 generic.go:334] "Generic (PLEG): container finished" podID="78ee8c3f-eebe-42fe-a34e-8e434b1b1444" containerID="74a0ace896b60d0c38999af8846eaf277b205593eea3f218ddc561c4703c5236" exitCode=0 Mar 12 16:34:04 crc kubenswrapper[4687]: I0312 16:34:04.982478 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" event={"ID":"78ee8c3f-eebe-42fe-a34e-8e434b1b1444","Type":"ContainerDied","Data":"74a0ace896b60d0c38999af8846eaf277b205593eea3f218ddc561c4703c5236"} Mar 12 16:34:06 crc kubenswrapper[4687]: I0312 16:34:06.495740 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" Mar 12 16:34:06 crc kubenswrapper[4687]: I0312 16:34:06.575779 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6ljk\" (UniqueName: \"kubernetes.io/projected/78ee8c3f-eebe-42fe-a34e-8e434b1b1444-kube-api-access-s6ljk\") pod \"78ee8c3f-eebe-42fe-a34e-8e434b1b1444\" (UID: \"78ee8c3f-eebe-42fe-a34e-8e434b1b1444\") " Mar 12 16:34:06 crc kubenswrapper[4687]: I0312 16:34:06.584646 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ee8c3f-eebe-42fe-a34e-8e434b1b1444-kube-api-access-s6ljk" (OuterVolumeSpecName: "kube-api-access-s6ljk") pod "78ee8c3f-eebe-42fe-a34e-8e434b1b1444" (UID: "78ee8c3f-eebe-42fe-a34e-8e434b1b1444"). InnerVolumeSpecName "kube-api-access-s6ljk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:34:06 crc kubenswrapper[4687]: I0312 16:34:06.679612 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6ljk\" (UniqueName: \"kubernetes.io/projected/78ee8c3f-eebe-42fe-a34e-8e434b1b1444-kube-api-access-s6ljk\") on node \"crc\" DevicePath \"\"" Mar 12 16:34:07 crc kubenswrapper[4687]: I0312 16:34:07.006153 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" event={"ID":"78ee8c3f-eebe-42fe-a34e-8e434b1b1444","Type":"ContainerDied","Data":"40a3940099345c9173ee20911fba31b464136c91ddd219bf4b5852449ebc456e"} Mar 12 16:34:07 crc kubenswrapper[4687]: I0312 16:34:07.006191 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a3940099345c9173ee20911fba31b464136c91ddd219bf4b5852449ebc456e" Mar 12 16:34:07 crc kubenswrapper[4687]: I0312 16:34:07.006241 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555554-q5mmh" Mar 12 16:34:07 crc kubenswrapper[4687]: I0312 16:34:07.052102 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555548-cjcvk"] Mar 12 16:34:07 crc kubenswrapper[4687]: I0312 16:34:07.065250 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555548-cjcvk"] Mar 12 16:34:07 crc kubenswrapper[4687]: I0312 16:34:07.746818 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6932a099-9587-4b14-925a-433e077f505b" path="/var/lib/kubelet/pods/6932a099-9587-4b14-925a-433e077f505b/volumes" Mar 12 16:34:13 crc kubenswrapper[4687]: I0312 16:34:13.037164 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-f8f1-account-create-update-tp6zx"] Mar 12 16:34:13 crc kubenswrapper[4687]: I0312 16:34:13.051496 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-89qs5"] Mar 12 16:34:13 crc kubenswrapper[4687]: I0312 16:34:13.064751 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-f8f1-account-create-update-tp6zx"] Mar 12 16:34:13 crc kubenswrapper[4687]: I0312 16:34:13.081149 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-89qs5"] Mar 12 16:34:13 crc kubenswrapper[4687]: I0312 16:34:13.753299 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da6dedc-beb0-4108-839d-f234a2cf3caf" path="/var/lib/kubelet/pods/0da6dedc-beb0-4108-839d-f234a2cf3caf/volumes" Mar 12 16:34:13 crc kubenswrapper[4687]: I0312 16:34:13.754272 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cd364a-718d-4924-91ec-c98402368e0c" path="/var/lib/kubelet/pods/30cd364a-718d-4924-91ec-c98402368e0c/volumes" Mar 12 16:34:24 crc kubenswrapper[4687]: I0312 16:34:24.053797 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9v52n"] Mar 12 16:34:24 crc kubenswrapper[4687]: I0312 16:34:24.070179 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9v52n"] Mar 12 16:34:25 crc kubenswrapper[4687]: I0312 16:34:25.751175 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc" path="/var/lib/kubelet/pods/eee0bc3f-1e9b-4d05-bdcc-747f6fbe96bc/volumes" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.340319 4687 scope.go:117] "RemoveContainer" containerID="83e9d2efe9cb1d4a2040ef0d9d31be5c6990d8b674506805bc8e90a6c5a0cede" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.366999 4687 scope.go:117] "RemoveContainer" containerID="4d3c6d3f811b2ab6c359bcc5540dd7bc5178e3addaed06480092f8f35c00a5b0" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.430300 4687 scope.go:117] "RemoveContainer" containerID="d1f12555ea0196703321946d370ece5fd948a2a83b16cc82cd855fdeb8eafaad" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.519155 4687 scope.go:117] "RemoveContainer" containerID="34520867e9b3da785ebe8fdfd742448d9cd815348ccf59cb11f77414ece37e47" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.574177 4687 scope.go:117] "RemoveContainer" containerID="613f2c31eaa4b8533a0b484844a67ee2c8688b1222e88e253dd8c66f1f372f4c" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.619795 4687 scope.go:117] "RemoveContainer" containerID="7172ec7e3f01eeb7591852362a5baee513b172d794fc0a9d1fba680584d098f7" Mar 12 16:34:30 crc 
kubenswrapper[4687]: I0312 16:34:30.673681 4687 scope.go:117] "RemoveContainer" containerID="a19c1c41061ef6a2567c3aa660b9806c7cb93025766138dc288b3b3ed10a2f9f" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.707960 4687 scope.go:117] "RemoveContainer" containerID="d2848347e043a5421edfa736791e6a0eb09327b5d50332ce96140841a35161a8" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.738728 4687 scope.go:117] "RemoveContainer" containerID="084532cf29d593d9392bb6b0b721e6aa46235245effdbbf003056c27ac60cca9" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.762299 4687 scope.go:117] "RemoveContainer" containerID="4dddb16df8822f7c3c92115d4a93f30b427ba29980676d0d9012bd6624bb432e" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.789052 4687 scope.go:117] "RemoveContainer" containerID="afc09065b9993d82b70db40071051c5e0060c6efcefde26794a3ad23e21f4e95" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.814221 4687 scope.go:117] "RemoveContainer" containerID="25224d9f872c79f9893a29df95451c820a490b86a01c20a6d5ec249dc8c1ee0a" Mar 12 16:34:30 crc kubenswrapper[4687]: I0312 16:34:30.838100 4687 scope.go:117] "RemoveContainer" containerID="b768375832f3ff94be486b7e2e9563a7c9f17008795cd518b18c8260563de2a9" Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.068829 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-99aa-account-create-update-pqqnd"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.082279 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fpswn"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.094845 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-33fc-account-create-update-fq2hv"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.108473 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-62ec-account-create-update-9jt56"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.123516 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-m94x4"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.134541 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-aca1-account-create-update-nxln9"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.145247 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6dk8w"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.155812 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-99aa-account-create-update-pqqnd"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.165764 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fpswn"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.176398 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-p7mnw"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.186971 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-aca1-account-create-update-nxln9"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.196865 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-m94x4"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.207068 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-62ec-account-create-update-9jt56"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.219103 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-33fc-account-create-update-fq2hv"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.229947 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6dk8w"] Mar 12 16:34:38 crc kubenswrapper[4687]: I0312 16:34:38.241840 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-p7mnw"] Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.757746 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056e0820-c20f-4aae-9019-5215b548730d" path="/var/lib/kubelet/pods/056e0820-c20f-4aae-9019-5215b548730d/volumes" Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.760124 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec53b11-d489-4b3d-9ab3-f71837d60140" path="/var/lib/kubelet/pods/2ec53b11-d489-4b3d-9ab3-f71837d60140/volumes" Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.762009 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aab1c2f-2474-4289-a7f9-f95918c43526" path="/var/lib/kubelet/pods/3aab1c2f-2474-4289-a7f9-f95918c43526/volumes" Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.763540 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e99f51e-e7c3-41f5-b23e-4b044485bccf" path="/var/lib/kubelet/pods/9e99f51e-e7c3-41f5-b23e-4b044485bccf/volumes" Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.768873 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d80001-4027-4204-87ea-74a277bbaefe" path="/var/lib/kubelet/pods/b2d80001-4027-4204-87ea-74a277bbaefe/volumes" Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.772744 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc214419-840c-47c7-ae24-0fa13e511604" path="/var/lib/kubelet/pods/bc214419-840c-47c7-ae24-0fa13e511604/volumes" Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.774008 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb64ffe-b4eb-4547-9abc-0a332ebfb74d" path="/var/lib/kubelet/pods/cbb64ffe-b4eb-4547-9abc-0a332ebfb74d/volumes" Mar 12 16:34:39 crc kubenswrapper[4687]: I0312 16:34:39.777062 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa45e688-7b23-42a0-af5d-36c3a074344f" path="/var/lib/kubelet/pods/fa45e688-7b23-42a0-af5d-36c3a074344f/volumes" Mar 12 16:34:42 crc kubenswrapper[4687]: I0312 16:34:42.041674 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-82n95"] Mar 12 16:34:42 crc kubenswrapper[4687]: I0312 16:34:42.053374 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-82n95"] Mar 12 16:34:43 crc kubenswrapper[4687]: I0312 16:34:43.750045 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b" path="/var/lib/kubelet/pods/afb2bc74-2552-44cf-9bb8-2b6ccc81eb9b/volumes" Mar 12 16:34:52 crc kubenswrapper[4687]: I0312 16:34:52.513264 4687 generic.go:334] "Generic (PLEG): container finished" podID="c24fada5-9a93-4300-85b9-19da711555dc" containerID="01d78d164d03fa5e32ae322036552d5ede775cb5da92dcb49162961d1ba4282b" exitCode=0 Mar 12 16:34:52 crc kubenswrapper[4687]: I0312 16:34:52.513442 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" event={"ID":"c24fada5-9a93-4300-85b9-19da711555dc","Type":"ContainerDied","Data":"01d78d164d03fa5e32ae322036552d5ede775cb5da92dcb49162961d1ba4282b"} Mar 12 16:34:54 
crc kubenswrapper[4687]: I0312 16:34:54.073289 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.144758 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-inventory\") pod \"c24fada5-9a93-4300-85b9-19da711555dc\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.145045 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsrz6\" (UniqueName: \"kubernetes.io/projected/c24fada5-9a93-4300-85b9-19da711555dc-kube-api-access-tsrz6\") pod \"c24fada5-9a93-4300-85b9-19da711555dc\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.145251 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-ssh-key-openstack-edpm-ipam\") pod \"c24fada5-9a93-4300-85b9-19da711555dc\" (UID: \"c24fada5-9a93-4300-85b9-19da711555dc\") " Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.158869 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24fada5-9a93-4300-85b9-19da711555dc-kube-api-access-tsrz6" (OuterVolumeSpecName: "kube-api-access-tsrz6") pod "c24fada5-9a93-4300-85b9-19da711555dc" (UID: "c24fada5-9a93-4300-85b9-19da711555dc"). InnerVolumeSpecName "kube-api-access-tsrz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.182406 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-inventory" (OuterVolumeSpecName: "inventory") pod "c24fada5-9a93-4300-85b9-19da711555dc" (UID: "c24fada5-9a93-4300-85b9-19da711555dc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.189601 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c24fada5-9a93-4300-85b9-19da711555dc" (UID: "c24fada5-9a93-4300-85b9-19da711555dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.248147 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.248185 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsrz6\" (UniqueName: \"kubernetes.io/projected/c24fada5-9a93-4300-85b9-19da711555dc-kube-api-access-tsrz6\") on node \"crc\" DevicePath \"\"" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.248199 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c24fada5-9a93-4300-85b9-19da711555dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.541762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" event={"ID":"c24fada5-9a93-4300-85b9-19da711555dc","Type":"ContainerDied","Data":"e9cf9b6ace6ff43da7c59b60ba5040ad5d267673d02927dab991f9a4133811a7"} Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.541810 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9cf9b6ace6ff43da7c59b60ba5040ad5d267673d02927dab991f9a4133811a7" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.541909 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.620765 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms"] Mar 12 16:34:54 crc kubenswrapper[4687]: E0312 16:34:54.621236 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ee8c3f-eebe-42fe-a34e-8e434b1b1444" containerName="oc" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.621252 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ee8c3f-eebe-42fe-a34e-8e434b1b1444" containerName="oc" Mar 12 16:34:54 crc kubenswrapper[4687]: E0312 16:34:54.621284 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24fada5-9a93-4300-85b9-19da711555dc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.621293 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24fada5-9a93-4300-85b9-19da711555dc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.621546 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24fada5-9a93-4300-85b9-19da711555dc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.621567 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ee8c3f-eebe-42fe-a34e-8e434b1b1444" containerName="oc" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.622327 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.625706 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.625895 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.626368 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.627039 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.632845 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms"] Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.759137 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.759595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.759804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bwqw\" (UniqueName: \"kubernetes.io/projected/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-kube-api-access-5bwqw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.861606 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.862728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bwqw\" (UniqueName: \"kubernetes.io/projected/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-kube-api-access-5bwqw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.863116 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.865999 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.873003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.910753 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bwqw\" (UniqueName: \"kubernetes.io/projected/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-kube-api-access-5bwqw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xfqms\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:54 crc kubenswrapper[4687]: I0312 16:34:54.949596 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:34:55 crc kubenswrapper[4687]: I0312 16:34:55.487629 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms"] Mar 12 16:34:55 crc kubenswrapper[4687]: I0312 16:34:55.553939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" event={"ID":"c2a0d336-cce5-4c55-a8c7-7d018a0131ed","Type":"ContainerStarted","Data":"7a9d7b48bf17187fbe4f4b95f8b73194c92e1f0f20cb40202d6af5eb9f920046"} Mar 12 16:34:56 crc kubenswrapper[4687]: I0312 16:34:56.564383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" event={"ID":"c2a0d336-cce5-4c55-a8c7-7d018a0131ed","Type":"ContainerStarted","Data":"29c786becf0ae0306b2f6ef00a3c8dfadab91dd1c6566ed7d7981e4df62b9ed6"} Mar 12 16:34:57 crc kubenswrapper[4687]: I0312 16:34:57.607273 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" podStartSLOduration=2.919068524 podStartE2EDuration="3.607254457s" podCreationTimestamp="2026-03-12 16:34:54 +0000 UTC" firstStartedPulling="2026-03-12 16:34:55.491965817 +0000 UTC m=+1944.455928161" lastFinishedPulling="2026-03-12 16:34:56.18015175 +0000 UTC m=+1945.144114094" observedRunningTime="2026-03-12 16:34:57.596690368 +0000 UTC m=+1946.560652712" watchObservedRunningTime="2026-03-12 16:34:57.607254457 +0000 UTC m=+1946.571216801" Mar 12 16:35:12 crc kubenswrapper[4687]: I0312 16:35:12.085811 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jqpm6"] Mar 12 
16:35:12 crc kubenswrapper[4687]: I0312 16:35:12.099409 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jqpm6"] Mar 12 16:35:13 crc kubenswrapper[4687]: I0312 16:35:13.765947 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda73682-741c-41fd-80be-97995f73b1e1" path="/var/lib/kubelet/pods/eda73682-741c-41fd-80be-97995f73b1e1/volumes" Mar 12 16:35:26 crc kubenswrapper[4687]: I0312 16:35:26.033079 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5ptv6"] Mar 12 16:35:26 crc kubenswrapper[4687]: I0312 16:35:26.049812 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5ptv6"] Mar 12 16:35:26 crc kubenswrapper[4687]: I0312 16:35:26.062141 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bqj4x"] Mar 12 16:35:26 crc kubenswrapper[4687]: I0312 16:35:26.072074 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bqj4x"] Mar 12 16:35:26 crc kubenswrapper[4687]: I0312 16:35:26.083805 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v655j"] Mar 12 16:35:26 crc kubenswrapper[4687]: I0312 16:35:26.094655 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v655j"] Mar 12 16:35:27 crc kubenswrapper[4687]: I0312 16:35:27.763701 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163922be-91e7-4655-8802-b92c4699bad8" path="/var/lib/kubelet/pods/163922be-91e7-4655-8802-b92c4699bad8/volumes" Mar 12 16:35:27 crc kubenswrapper[4687]: I0312 16:35:27.765888 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5649bea8-d269-4e13-bcf9-8e58f4cdb132" path="/var/lib/kubelet/pods/5649bea8-d269-4e13-bcf9-8e58f4cdb132/volumes" Mar 12 16:35:27 crc kubenswrapper[4687]: I0312 16:35:27.766799 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8483f386-39e7-45eb-bd05-02dcdce7677f" path="/var/lib/kubelet/pods/8483f386-39e7-45eb-bd05-02dcdce7677f/volumes" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.152649 4687 scope.go:117] "RemoveContainer" containerID="da01c485fd9632cd3eb4b12fffee261dd77fb23a9cb2c986d96c52f771d323c9" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.176228 4687 scope.go:117] "RemoveContainer" containerID="dbfdc18c61b8236292443d6e9b81a4436b38f6691c04f18cbfa7ee77fc877cc0" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.243130 4687 scope.go:117] "RemoveContainer" containerID="19f85534d613ff5e57b0fed25c4e2a8f3a3e544cad06c4bf656a2a67ed95cdb2" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.308129 4687 scope.go:117] "RemoveContainer" containerID="626d5c30dbc033cf337b7a7add3de1774d786b6dbb36ee9e870eaf201c7b4a96" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.376907 4687 scope.go:117] "RemoveContainer" containerID="6f286c784a6abbac6e3f98dced0e5fd35d18f8d04853341d516aa54c0c3ec433" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.415072 4687 scope.go:117] "RemoveContainer" containerID="2f8ae45f6982ec91c6cc8039791ca6005afc026b46b72d50bc0ad0474f10ece6" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.473132 4687 scope.go:117] "RemoveContainer" containerID="706ffe0ee0580ee5c68a799ee5b454a588dd0edbb5186508b697c381e8d06238" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.493281 4687 scope.go:117] "RemoveContainer" containerID="1a32a2745633e102a2fc533b305c76e40c50a6a5701fa9851044ecaff05ad167" Mar 12 16:35:31 
crc kubenswrapper[4687]: I0312 16:35:31.514543 4687 scope.go:117] "RemoveContainer" containerID="ad44f0c17e33ffd3791ef0cb799856fec0c78bb723e35b41ab738702748a305c" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.545962 4687 scope.go:117] "RemoveContainer" containerID="68f9290961698fcc142d1c9d4a3d15b482223729176554885285d7fb51dafd38" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.564940 4687 scope.go:117] "RemoveContainer" containerID="f1179a4d14d922ef89c4055b62016f2f0a74d9c67ef31b590a51caf5002c3005" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.590624 4687 scope.go:117] "RemoveContainer" containerID="f6f2e753d78795ac77de8689d087d17dc6cebbb633cabeb277e66e7a2a015846" Mar 12 16:35:31 crc kubenswrapper[4687]: I0312 16:35:31.610149 4687 scope.go:117] "RemoveContainer" containerID="8a64a5f454239ed79cc9a4dcaf04d01f5efb59cde19c9a99bca8ac98dcc26034" Mar 12 16:35:42 crc kubenswrapper[4687]: I0312 16:35:42.051851 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9mcqx"] Mar 12 16:35:42 crc kubenswrapper[4687]: I0312 16:35:42.062990 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9mcqx"] Mar 12 16:35:43 crc kubenswrapper[4687]: I0312 16:35:43.748126 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de14ef1d-d5a7-490a-a522-8d4cf39989b4" path="/var/lib/kubelet/pods/de14ef1d-d5a7-490a-a522-8d4cf39989b4/volumes" Mar 12 16:35:44 crc kubenswrapper[4687]: I0312 16:35:44.122267 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:35:44 crc kubenswrapper[4687]: I0312 16:35:44.122382 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:35:56 crc kubenswrapper[4687]: I0312 16:35:56.320205 4687 generic.go:334] "Generic (PLEG): container finished" podID="c2a0d336-cce5-4c55-a8c7-7d018a0131ed" containerID="29c786becf0ae0306b2f6ef00a3c8dfadab91dd1c6566ed7d7981e4df62b9ed6" exitCode=0 Mar 12 16:35:56 crc kubenswrapper[4687]: I0312 16:35:56.320484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" event={"ID":"c2a0d336-cce5-4c55-a8c7-7d018a0131ed","Type":"ContainerDied","Data":"29c786becf0ae0306b2f6ef00a3c8dfadab91dd1c6566ed7d7981e4df62b9ed6"} Mar 12 16:35:57 crc kubenswrapper[4687]: I0312 16:35:57.820384 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:35:57 crc kubenswrapper[4687]: I0312 16:35:57.953517 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-ssh-key-openstack-edpm-ipam\") pod \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " Mar 12 16:35:57 crc kubenswrapper[4687]: I0312 16:35:57.953731 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-inventory\") pod \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " Mar 12 16:35:57 crc kubenswrapper[4687]: I0312 16:35:57.953793 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bwqw\" (UniqueName: \"kubernetes.io/projected/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-kube-api-access-5bwqw\") pod \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\" (UID: \"c2a0d336-cce5-4c55-a8c7-7d018a0131ed\") " Mar 12 16:35:57 crc kubenswrapper[4687]: I0312 16:35:57.959516 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-kube-api-access-5bwqw" (OuterVolumeSpecName: "kube-api-access-5bwqw") pod "c2a0d336-cce5-4c55-a8c7-7d018a0131ed" (UID: "c2a0d336-cce5-4c55-a8c7-7d018a0131ed"). InnerVolumeSpecName "kube-api-access-5bwqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:35:57 crc kubenswrapper[4687]: I0312 16:35:57.992794 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c2a0d336-cce5-4c55-a8c7-7d018a0131ed" (UID: "c2a0d336-cce5-4c55-a8c7-7d018a0131ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.005571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-inventory" (OuterVolumeSpecName: "inventory") pod "c2a0d336-cce5-4c55-a8c7-7d018a0131ed" (UID: "c2a0d336-cce5-4c55-a8c7-7d018a0131ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.057089 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.057312 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.057398 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bwqw\" (UniqueName: \"kubernetes.io/projected/c2a0d336-cce5-4c55-a8c7-7d018a0131ed-kube-api-access-5bwqw\") on node \"crc\" DevicePath \"\"" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.342212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" event={"ID":"c2a0d336-cce5-4c55-a8c7-7d018a0131ed","Type":"ContainerDied","Data":"7a9d7b48bf17187fbe4f4b95f8b73194c92e1f0f20cb40202d6af5eb9f920046"} Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.342261 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a9d7b48bf17187fbe4f4b95f8b73194c92e1f0f20cb40202d6af5eb9f920046" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.342323 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xfqms" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.423068 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh"] Mar 12 16:35:58 crc kubenswrapper[4687]: E0312 16:35:58.424349 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a0d336-cce5-4c55-a8c7-7d018a0131ed" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.424411 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a0d336-cce5-4c55-a8c7-7d018a0131ed" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.424827 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a0d336-cce5-4c55-a8c7-7d018a0131ed" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.426210 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.428440 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.428690 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.428738 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.429534 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.435284 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh"] Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.472590 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkk4\" (UniqueName: \"kubernetes.io/projected/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-kube-api-access-8tkk4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.472677 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.472905 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.575828 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tkk4\" (UniqueName: \"kubernetes.io/projected/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-kube-api-access-8tkk4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.575907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.576041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.579698 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.579757 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.597876 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tkk4\" (UniqueName: \"kubernetes.io/projected/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-kube-api-access-8tkk4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-stndh\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:58 crc kubenswrapper[4687]: I0312 16:35:58.753527 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:35:59 crc kubenswrapper[4687]: I0312 16:35:59.350660 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh"] Mar 12 16:35:59 crc kubenswrapper[4687]: I0312 16:35:59.358869 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.137685 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555556-vlwk9"] Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.139770 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.142557 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.142692 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.142864 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.170182 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555556-vlwk9"] Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.213392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m52v\" (UniqueName: \"kubernetes.io/projected/b8ff2b76-6e20-4908-81f5-7fa1c10a1b62-kube-api-access-6m52v\") pod \"auto-csr-approver-29555556-vlwk9\" (UID: \"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62\") " pod="openshift-infra/auto-csr-approver-29555556-vlwk9" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.315465 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m52v\" (UniqueName: \"kubernetes.io/projected/b8ff2b76-6e20-4908-81f5-7fa1c10a1b62-kube-api-access-6m52v\") pod \"auto-csr-approver-29555556-vlwk9\" (UID: \"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62\") " pod="openshift-infra/auto-csr-approver-29555556-vlwk9" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.344312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m52v\" (UniqueName: \"kubernetes.io/projected/b8ff2b76-6e20-4908-81f5-7fa1c10a1b62-kube-api-access-6m52v\") pod \"auto-csr-approver-29555556-vlwk9\" (UID: \"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62\") " pod="openshift-infra/auto-csr-approver-29555556-vlwk9" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.363442 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" event={"ID":"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe","Type":"ContainerStarted","Data":"8e8de6b501427da7d760e6351b0918683b864e2bccb298c3cc8b5fedba1e3417"} Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.363481 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" event={"ID":"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe","Type":"ContainerStarted","Data":"b72bcd4d7668eb4e6ef1336f5419132a8756ae5b43bb4615ca145f59793fb356"} Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.380564 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" podStartSLOduration=1.906647473 podStartE2EDuration="2.38054485s" podCreationTimestamp="2026-03-12 16:35:58 +0000 UTC" firstStartedPulling="2026-03-12 16:35:59.358634018 +0000 UTC m=+2008.322596362" lastFinishedPulling="2026-03-12 16:35:59.832531395 +0000 UTC m=+2008.796493739" observedRunningTime="2026-03-12 16:36:00.378998338 +0000 UTC m=+2009.342960682" watchObservedRunningTime="2026-03-12 16:36:00.38054485 +0000 UTC m=+2009.344507194" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.457998 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" Mar 12 16:36:00 crc kubenswrapper[4687]: I0312 16:36:00.920144 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555556-vlwk9"] Mar 12 16:36:01 crc kubenswrapper[4687]: I0312 16:36:01.373646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" event={"ID":"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62","Type":"ContainerStarted","Data":"367fd885f541d770c78b86777fddcf2b83f80fff432a988bfd1e68b355f77397"} Mar 12 16:36:02 crc kubenswrapper[4687]: I0312 16:36:02.387532 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" event={"ID":"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62","Type":"ContainerStarted","Data":"4ae82c930f5c0da069f67d382772ec4743ccccccfcd6578d7778634022986ac4"} Mar 12 16:36:02 crc kubenswrapper[4687]: I0312 16:36:02.409230 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" podStartSLOduration=1.267702772 podStartE2EDuration="2.409213198s" podCreationTimestamp="2026-03-12 16:36:00 +0000 UTC" firstStartedPulling="2026-03-12 16:36:00.926083578 +0000 UTC m=+2009.890045922" lastFinishedPulling="2026-03-12 16:36:02.067594004 +0000 UTC m=+2011.031556348" observedRunningTime="2026-03-12 16:36:02.404601532 +0000 UTC m=+2011.368563876" watchObservedRunningTime="2026-03-12 16:36:02.409213198 +0000 UTC m=+2011.373175542" Mar 12 16:36:03 crc kubenswrapper[4687]: I0312 16:36:03.399817 4687 generic.go:334] "Generic (PLEG): container finished" podID="b8ff2b76-6e20-4908-81f5-7fa1c10a1b62" containerID="4ae82c930f5c0da069f67d382772ec4743ccccccfcd6578d7778634022986ac4" exitCode=0 Mar 12 16:36:03 crc kubenswrapper[4687]: I0312 16:36:03.400059 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" event={"ID":"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62","Type":"ContainerDied","Data":"4ae82c930f5c0da069f67d382772ec4743ccccccfcd6578d7778634022986ac4"} Mar 12 16:36:04 crc kubenswrapper[4687]: I0312 16:36:04.857489 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.028307 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m52v\" (UniqueName: \"kubernetes.io/projected/b8ff2b76-6e20-4908-81f5-7fa1c10a1b62-kube-api-access-6m52v\") pod \"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62\" (UID: \"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62\") " Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.034201 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ff2b76-6e20-4908-81f5-7fa1c10a1b62-kube-api-access-6m52v" (OuterVolumeSpecName: "kube-api-access-6m52v") pod "b8ff2b76-6e20-4908-81f5-7fa1c10a1b62" (UID: "b8ff2b76-6e20-4908-81f5-7fa1c10a1b62"). InnerVolumeSpecName "kube-api-access-6m52v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.130926 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m52v\" (UniqueName: \"kubernetes.io/projected/b8ff2b76-6e20-4908-81f5-7fa1c10a1b62-kube-api-access-6m52v\") on node \"crc\" DevicePath \"\"" Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.420991 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" event={"ID":"b8ff2b76-6e20-4908-81f5-7fa1c10a1b62","Type":"ContainerDied","Data":"367fd885f541d770c78b86777fddcf2b83f80fff432a988bfd1e68b355f77397"} Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.421024 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555556-vlwk9" Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.421041 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="367fd885f541d770c78b86777fddcf2b83f80fff432a988bfd1e68b355f77397" Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.422819 4687 generic.go:334] "Generic (PLEG): container finished" podID="6c07bd33-4fe2-4ac1-8493-0fc93f9698fe" containerID="8e8de6b501427da7d760e6351b0918683b864e2bccb298c3cc8b5fedba1e3417" exitCode=0 Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.422851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" event={"ID":"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe","Type":"ContainerDied","Data":"8e8de6b501427da7d760e6351b0918683b864e2bccb298c3cc8b5fedba1e3417"} Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.946133 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555550-hbp7f"] Mar 12 16:36:05 crc kubenswrapper[4687]: I0312 16:36:05.958714 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555550-hbp7f"] Mar 12 16:36:06 crc kubenswrapper[4687]: I0312 16:36:06.905024 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:36:06 crc kubenswrapper[4687]: I0312 16:36:06.969930 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-inventory\") pod \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " Mar 12 16:36:06 crc kubenswrapper[4687]: I0312 16:36:06.970240 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tkk4\" (UniqueName: \"kubernetes.io/projected/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-kube-api-access-8tkk4\") pod \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " Mar 12 16:36:06 crc kubenswrapper[4687]: I0312 16:36:06.970271 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-ssh-key-openstack-edpm-ipam\") pod \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\" (UID: \"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe\") " Mar 12 16:36:06 crc kubenswrapper[4687]: I0312 16:36:06.975512 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-kube-api-access-8tkk4" (OuterVolumeSpecName: "kube-api-access-8tkk4") pod "6c07bd33-4fe2-4ac1-8493-0fc93f9698fe" (UID: "6c07bd33-4fe2-4ac1-8493-0fc93f9698fe"). InnerVolumeSpecName "kube-api-access-8tkk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.003233 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6c07bd33-4fe2-4ac1-8493-0fc93f9698fe" (UID: "6c07bd33-4fe2-4ac1-8493-0fc93f9698fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.015599 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-inventory" (OuterVolumeSpecName: "inventory") pod "6c07bd33-4fe2-4ac1-8493-0fc93f9698fe" (UID: "6c07bd33-4fe2-4ac1-8493-0fc93f9698fe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.073202 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tkk4\" (UniqueName: \"kubernetes.io/projected/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-kube-api-access-8tkk4\") on node \"crc\" DevicePath \"\"" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.073235 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.073247 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c07bd33-4fe2-4ac1-8493-0fc93f9698fe-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.444112 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" event={"ID":"6c07bd33-4fe2-4ac1-8493-0fc93f9698fe","Type":"ContainerDied","Data":"b72bcd4d7668eb4e6ef1336f5419132a8756ae5b43bb4615ca145f59793fb356"} Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.444164 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72bcd4d7668eb4e6ef1336f5419132a8756ae5b43bb4615ca145f59793fb356" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.444227 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-stndh" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.525435 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8"] Mar 12 16:36:07 crc kubenswrapper[4687]: E0312 16:36:07.526353 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ff2b76-6e20-4908-81f5-7fa1c10a1b62" containerName="oc" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.526465 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ff2b76-6e20-4908-81f5-7fa1c10a1b62" containerName="oc" Mar 12 16:36:07 crc kubenswrapper[4687]: E0312 16:36:07.526573 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c07bd33-4fe2-4ac1-8493-0fc93f9698fe" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.526657 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c07bd33-4fe2-4ac1-8493-0fc93f9698fe" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.527045 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ff2b76-6e20-4908-81f5-7fa1c10a1b62" containerName="oc" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.527465 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c07bd33-4fe2-4ac1-8493-0fc93f9698fe" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.528592 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.531582 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.531829 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.532111 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.532217 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.541349 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8"] Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.686786 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dzn\" (UniqueName: \"kubernetes.io/projected/c4aa11c4-93fb-446b-ac1b-c62279d040dd-kube-api-access-k6dzn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.687193 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.687278 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.746179 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9a447d-54f6-42cc-bd25-42894645e1cf" path="/var/lib/kubelet/pods/0b9a447d-54f6-42cc-bd25-42894645e1cf/volumes" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.790617 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6dzn\" (UniqueName: \"kubernetes.io/projected/c4aa11c4-93fb-446b-ac1b-c62279d040dd-kube-api-access-k6dzn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.790869 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" 
Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.790944 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.794591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.805773 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.806535 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6dzn\" (UniqueName: \"kubernetes.io/projected/c4aa11c4-93fb-446b-ac1b-c62279d040dd-kube-api-access-k6dzn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsdp8\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:07 crc kubenswrapper[4687]: I0312 16:36:07.854091 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:08 crc kubenswrapper[4687]: I0312 16:36:08.413073 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8"] Mar 12 16:36:08 crc kubenswrapper[4687]: I0312 16:36:08.458531 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" event={"ID":"c4aa11c4-93fb-446b-ac1b-c62279d040dd","Type":"ContainerStarted","Data":"e885376eabfb09f0e259fa88716f9fca38fd58024582588a07c85adb74fe852c"} Mar 12 16:36:09 crc kubenswrapper[4687]: I0312 16:36:09.470052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" event={"ID":"c4aa11c4-93fb-446b-ac1b-c62279d040dd","Type":"ContainerStarted","Data":"cedcf0738d6e58bdb42d5918a2e55fd9435dec67a1813c11edfa0b5e92cda63c"} Mar 12 16:36:09 crc kubenswrapper[4687]: I0312 16:36:09.505898 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" podStartSLOduration=2.0865008720000002 podStartE2EDuration="2.505877475s" podCreationTimestamp="2026-03-12 16:36:07 +0000 UTC" firstStartedPulling="2026-03-12 16:36:08.408611671 +0000 UTC m=+2017.372574035" lastFinishedPulling="2026-03-12 16:36:08.827988284 +0000 UTC m=+2017.791950638" observedRunningTime="2026-03-12 16:36:09.489873227 +0000 UTC m=+2018.453835581" watchObservedRunningTime="2026-03-12 16:36:09.505877475 +0000 UTC m=+2018.469839829" Mar 12 16:36:14 crc kubenswrapper[4687]: I0312 16:36:14.121736 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:36:14 crc kubenswrapper[4687]: I0312 16:36:14.122163 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:36:24 crc kubenswrapper[4687]: I0312 16:36:24.178925 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" podUID="0bbd130e-9a81-466f-8d89-79c2fa5fdc4c" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 12 16:36:27 crc kubenswrapper[4687]: I0312 16:36:27.034175 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1bf9-account-create-update-zk5jd"] Mar 12 16:36:27 crc kubenswrapper[4687]: I0312 16:36:27.045316 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1bf9-account-create-update-zk5jd"] Mar 12 16:36:27 crc kubenswrapper[4687]: I0312 16:36:27.746633 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8959ad5d-c828-4dcf-993f-4225e02fa8ff" path="/var/lib/kubelet/pods/8959ad5d-c828-4dcf-993f-4225e02fa8ff/volumes" Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.045490 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9891-account-create-update-lmtdz"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.057372 4687 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-db-create-dbdrp"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.073649 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6qfnd"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.090947 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-13ba-account-create-update-mjkr5"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.103419 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9891-account-create-update-lmtdz"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.117666 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-13ba-account-create-update-mjkr5"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.127524 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dbdrp"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.137930 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6qfnd"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.149333 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6627h"] Mar 12 16:36:28 crc kubenswrapper[4687]: I0312 16:36:28.158807 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6627h"] Mar 12 16:36:29 crc kubenswrapper[4687]: I0312 16:36:29.747043 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3221bf1a-072f-43e5-add9-4b21a6145692" path="/var/lib/kubelet/pods/3221bf1a-072f-43e5-add9-4b21a6145692/volumes" Mar 12 16:36:29 crc kubenswrapper[4687]: I0312 16:36:29.747748 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e2c288-c849-482c-97e9-671307534961" path="/var/lib/kubelet/pods/53e2c288-c849-482c-97e9-671307534961/volumes" Mar 12 16:36:29 crc kubenswrapper[4687]: I0312 16:36:29.748382 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652d0f48-b79d-4be2-a971-735297c4c3d6" path="/var/lib/kubelet/pods/652d0f48-b79d-4be2-a971-735297c4c3d6/volumes" Mar 12 16:36:29 crc kubenswrapper[4687]: I0312 16:36:29.749049 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="831bdc9a-ebd0-4559-8f76-4f590cd3ea4f" path="/var/lib/kubelet/pods/831bdc9a-ebd0-4559-8f76-4f590cd3ea4f/volumes" Mar 12 16:36:29 crc kubenswrapper[4687]: I0312 16:36:29.750172 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b16131f-3ac7-4721-ae74-8f7c937a3fec" path="/var/lib/kubelet/pods/9b16131f-3ac7-4721-ae74-8f7c937a3fec/volumes" Mar 12 16:36:31 crc kubenswrapper[4687]: I0312 16:36:31.901188 4687 scope.go:117] "RemoveContainer" containerID="9cfa4e468e0d228171adcb1e558c0ea63581a15fd4e03271378cc078cf17ea4b" Mar 12 16:36:31 crc kubenswrapper[4687]: I0312 16:36:31.949716 4687 scope.go:117] "RemoveContainer" containerID="5171825249e60cd0f9015a61306f07503bc14f81e8e87a42d0450bca948ecda8" Mar 12 16:36:31 crc kubenswrapper[4687]: I0312 16:36:31.986905 4687 scope.go:117] "RemoveContainer" containerID="ddb31459b6232ed9ad823e0e0cd6f522b682c3cf0126c06fbfc7d49a96ea46c6" Mar 12 16:36:32 crc kubenswrapper[4687]: I0312 16:36:32.039278 4687 scope.go:117] "RemoveContainer" containerID="38804c602783559eacfc856aabc79a853c073183cd238006610cc1b433bfe2c8" Mar 12 16:36:32 crc kubenswrapper[4687]: I0312 16:36:32.107458 4687 scope.go:117] "RemoveContainer" containerID="fd4b5e1d581c40097a4911554709176928a0d1bd5434eda4deef8bf5432569c5" 
Mar 12 16:36:32 crc kubenswrapper[4687]: I0312 16:36:32.157238 4687 scope.go:117] "RemoveContainer" containerID="bcd2c8a87e1a0716685610d7746b9d5dd976c22c79d808f674932b8e07c4b8b1" Mar 12 16:36:32 crc kubenswrapper[4687]: I0312 16:36:32.205585 4687 scope.go:117] "RemoveContainer" containerID="085403b99998b377fc055c062b9016db054f24d6ce2a5cb5bf108a44a2901ffc" Mar 12 16:36:32 crc kubenswrapper[4687]: I0312 16:36:32.231319 4687 scope.go:117] "RemoveContainer" containerID="ef0108e98f9f57ee40f4ecfd3725dcf4476c4e7cc9f2f63277ddeb75ca67011e" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.145110 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9b47"] Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.148254 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.162585 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9b47"] Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.234328 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-utilities\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.234407 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvtx\" (UniqueName: \"kubernetes.io/projected/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-kube-api-access-wxvtx\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.234701 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-catalog-content\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.337241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-catalog-content\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.337447 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-utilities\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.337478 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxvtx\" (UniqueName: \"kubernetes.io/projected/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-kube-api-access-wxvtx\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.337782 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-catalog-content\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.337792 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-utilities\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.356874 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxvtx\" (UniqueName: \"kubernetes.io/projected/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-kube-api-access-wxvtx\") pod \"redhat-operators-z9b47\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:37 crc kubenswrapper[4687]: I0312 16:36:37.485829 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:38 crc kubenswrapper[4687]: I0312 16:36:38.006476 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9b47"] Mar 12 16:36:38 crc kubenswrapper[4687]: I0312 16:36:38.818867 4687 generic.go:334] "Generic (PLEG): container finished" podID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerID="2e4d56c45d377e2deae276aea6b88ede42474dfd391e05fe9ea3c543b8a21a52" exitCode=0 Mar 12 16:36:38 crc kubenswrapper[4687]: I0312 16:36:38.819133 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9b47" event={"ID":"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520","Type":"ContainerDied","Data":"2e4d56c45d377e2deae276aea6b88ede42474dfd391e05fe9ea3c543b8a21a52"} Mar 12 16:36:38 crc kubenswrapper[4687]: I0312 16:36:38.819163 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9b47" event={"ID":"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520","Type":"ContainerStarted","Data":"88e28dbc7ae88fd473836fe4fee7f8af6568c939ae77e919516ba1abea11c8d4"} Mar 12 16:36:43 crc kubenswrapper[4687]: I0312 16:36:43.875809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" event={"ID":"c4aa11c4-93fb-446b-ac1b-c62279d040dd","Type":"ContainerDied","Data":"cedcf0738d6e58bdb42d5918a2e55fd9435dec67a1813c11edfa0b5e92cda63c"} Mar 12 16:36:43 crc kubenswrapper[4687]: I0312 16:36:43.875608 4687 generic.go:334] "Generic (PLEG): container finished" podID="c4aa11c4-93fb-446b-ac1b-c62279d040dd" containerID="cedcf0738d6e58bdb42d5918a2e55fd9435dec67a1813c11edfa0b5e92cda63c" exitCode=0 Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.121601 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.121661 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.121705 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.122682 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9dc6392cb7dbf829f54eade1ebdbf1980615c87a7f18a1a2fbd8d0c0a711640"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.122756 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://d9dc6392cb7dbf829f54eade1ebdbf1980615c87a7f18a1a2fbd8d0c0a711640" gracePeriod=600 Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.899776 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="d9dc6392cb7dbf829f54eade1ebdbf1980615c87a7f18a1a2fbd8d0c0a711640" exitCode=0 Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.900352 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"d9dc6392cb7dbf829f54eade1ebdbf1980615c87a7f18a1a2fbd8d0c0a711640"} Mar 12 16:36:44 crc kubenswrapper[4687]: I0312 16:36:44.900441 4687 scope.go:117] "RemoveContainer" containerID="35091a9764607e3cd2935740b896b177cf31b73dd89241b04b61e7ddb0e477e4" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.668684 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.832513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-inventory\") pod \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.832624 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6dzn\" (UniqueName: \"kubernetes.io/projected/c4aa11c4-93fb-446b-ac1b-c62279d040dd-kube-api-access-k6dzn\") pod \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.832721 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-ssh-key-openstack-edpm-ipam\") pod \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\" (UID: \"c4aa11c4-93fb-446b-ac1b-c62279d040dd\") " Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.841269 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4aa11c4-93fb-446b-ac1b-c62279d040dd-kube-api-access-k6dzn" (OuterVolumeSpecName: "kube-api-access-k6dzn") pod "c4aa11c4-93fb-446b-ac1b-c62279d040dd" (UID: "c4aa11c4-93fb-446b-ac1b-c62279d040dd"). InnerVolumeSpecName "kube-api-access-k6dzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.899506 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4aa11c4-93fb-446b-ac1b-c62279d040dd" (UID: "c4aa11c4-93fb-446b-ac1b-c62279d040dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.917570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-inventory" (OuterVolumeSpecName: "inventory") pod "c4aa11c4-93fb-446b-ac1b-c62279d040dd" (UID: "c4aa11c4-93fb-446b-ac1b-c62279d040dd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.936564 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.936814 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6dzn\" (UniqueName: \"kubernetes.io/projected/c4aa11c4-93fb-446b-ac1b-c62279d040dd-kube-api-access-k6dzn\") on node \"crc\" DevicePath \"\"" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.936881 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4aa11c4-93fb-446b-ac1b-c62279d040dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.953709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" event={"ID":"c4aa11c4-93fb-446b-ac1b-c62279d040dd","Type":"ContainerDied","Data":"e885376eabfb09f0e259fa88716f9fca38fd58024582588a07c85adb74fe852c"} Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.953947 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e885376eabfb09f0e259fa88716f9fca38fd58024582588a07c85adb74fe852c" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.954071 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsdp8" Mar 12 16:36:47 crc kubenswrapper[4687]: I0312 16:36:47.974752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9b47" event={"ID":"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520","Type":"ContainerStarted","Data":"b9470fb53e6297cdfadda033e5562c842fdb9bcf8f75fcfd56b13b610889e485"} Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.010809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a"} Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.771721 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb"] Mar 12 16:36:48 crc kubenswrapper[4687]: E0312 16:36:48.772247 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4aa11c4-93fb-446b-ac1b-c62279d040dd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.772260 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4aa11c4-93fb-446b-ac1b-c62279d040dd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.772494 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4aa11c4-93fb-446b-ac1b-c62279d040dd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.773249 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.777120 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.777352 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.777396 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.777641 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.784344 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb"] Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.972043 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82czv\" (UniqueName: \"kubernetes.io/projected/754c5904-0fe4-408c-bf65-439e420218f8-kube-api-access-82czv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.972525 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:48 crc kubenswrapper[4687]: I0312 16:36:48.972583 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.074456 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82czv\" (UniqueName: \"kubernetes.io/projected/754c5904-0fe4-408c-bf65-439e420218f8-kube-api-access-82czv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.074683 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.074729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.089915 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.090928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.092243 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82czv\" (UniqueName: \"kubernetes.io/projected/754c5904-0fe4-408c-bf65-439e420218f8-kube-api-access-82czv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.129349 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:36:49 crc kubenswrapper[4687]: W0312 16:36:49.703340 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754c5904_0fe4_408c_bf65_439e420218f8.slice/crio-365b69df1864fe7084db5ef7fa26c73d47f87e8eb0e81e4387cb2649b8db0cd3 WatchSource:0}: Error finding container 365b69df1864fe7084db5ef7fa26c73d47f87e8eb0e81e4387cb2649b8db0cd3: Status 404 returned error can't find the container with id 365b69df1864fe7084db5ef7fa26c73d47f87e8eb0e81e4387cb2649b8db0cd3 Mar 12 16:36:49 crc kubenswrapper[4687]: I0312 16:36:49.704930 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb"] Mar 12 16:36:50 crc kubenswrapper[4687]: I0312 16:36:50.032972 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" event={"ID":"754c5904-0fe4-408c-bf65-439e420218f8","Type":"ContainerStarted","Data":"365b69df1864fe7084db5ef7fa26c73d47f87e8eb0e81e4387cb2649b8db0cd3"} Mar 12 16:36:51 crc kubenswrapper[4687]: I0312 16:36:51.046501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" event={"ID":"754c5904-0fe4-408c-bf65-439e420218f8","Type":"ContainerStarted","Data":"0a45f618c83cc0e12b0094e6c283e1a2c4914d18d23c04820ef142267ab78b79"} Mar 12 16:36:51 crc kubenswrapper[4687]: I0312 16:36:51.081276 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" podStartSLOduration=2.649092995 podStartE2EDuration="3.081255449s" podCreationTimestamp="2026-03-12 16:36:48 +0000 UTC" 
firstStartedPulling="2026-03-12 16:36:49.706214998 +0000 UTC m=+2058.670177362" lastFinishedPulling="2026-03-12 16:36:50.138377482 +0000 UTC m=+2059.102339816" observedRunningTime="2026-03-12 16:36:51.067587244 +0000 UTC m=+2060.031549618" watchObservedRunningTime="2026-03-12 16:36:51.081255449 +0000 UTC m=+2060.045217803" Mar 12 16:36:53 crc kubenswrapper[4687]: I0312 16:36:53.075681 4687 generic.go:334] "Generic (PLEG): container finished" podID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerID="b9470fb53e6297cdfadda033e5562c842fdb9bcf8f75fcfd56b13b610889e485" exitCode=0 Mar 12 16:36:53 crc kubenswrapper[4687]: I0312 16:36:53.076106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9b47" event={"ID":"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520","Type":"ContainerDied","Data":"b9470fb53e6297cdfadda033e5562c842fdb9bcf8f75fcfd56b13b610889e485"} Mar 12 16:36:54 crc kubenswrapper[4687]: I0312 16:36:54.122218 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9b47" event={"ID":"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520","Type":"ContainerStarted","Data":"df5c4edfd19adb4ba5e38fa774d6eb5ba06f70c2fc6381ccf8c0f9823f0c7f10"} Mar 12 16:36:54 crc kubenswrapper[4687]: I0312 16:36:54.161897 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9b47" podStartSLOduration=2.456561506 podStartE2EDuration="17.161871329s" podCreationTimestamp="2026-03-12 16:36:37 +0000 UTC" firstStartedPulling="2026-03-12 16:36:38.821230341 +0000 UTC m=+2047.785192705" lastFinishedPulling="2026-03-12 16:36:53.526540174 +0000 UTC m=+2062.490502528" observedRunningTime="2026-03-12 16:36:54.13815778 +0000 UTC m=+2063.102120124" watchObservedRunningTime="2026-03-12 16:36:54.161871329 +0000 UTC m=+2063.125833683" Mar 12 16:36:57 crc kubenswrapper[4687]: I0312 16:36:57.486742 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:57 crc kubenswrapper[4687]: I0312 16:36:57.487232 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:36:58 crc kubenswrapper[4687]: I0312 16:36:58.556334 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9b47" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="registry-server" probeResult="failure" output=< Mar 12 16:36:58 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:36:58 crc kubenswrapper[4687]: > Mar 12 16:36:59 crc kubenswrapper[4687]: I0312 16:36:59.054050 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m57l8"] Mar 12 16:36:59 crc kubenswrapper[4687]: I0312 16:36:59.064519 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m57l8"] Mar 12 16:36:59 crc kubenswrapper[4687]: I0312 16:36:59.745883 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa080a8e-b5e2-43f7-8d94-ea07058748b6" path="/var/lib/kubelet/pods/aa080a8e-b5e2-43f7-8d94-ea07058748b6/volumes" Mar 12 16:37:08 crc kubenswrapper[4687]: I0312 16:37:08.539528 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9b47" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="registry-server" probeResult="failure" output=< Mar 12 16:37:08 crc kubenswrapper[4687]: 
timeout: failed to connect service ":50051" within 1s Mar 12 16:37:08 crc kubenswrapper[4687]: > Mar 12 16:37:17 crc kubenswrapper[4687]: I0312 16:37:17.545056 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:37:17 crc kubenswrapper[4687]: I0312 16:37:17.594430 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:37:17 crc kubenswrapper[4687]: I0312 16:37:17.688136 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9b47"] Mar 12 16:37:17 crc kubenswrapper[4687]: I0312 16:37:17.796397 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22nvv"] Mar 12 16:37:17 crc kubenswrapper[4687]: I0312 16:37:17.797155 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-22nvv" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="registry-server" containerID="cri-o://db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93" gracePeriod=2 Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.064121 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-9pzgb"] Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.093553 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-d8b1-account-create-update-7jbm7"] Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.101696 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-9pzgb"] Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.111289 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-d8b1-account-create-update-7jbm7"] Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.483984 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.634501 4687 generic.go:334] "Generic (PLEG): container finished" podID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerID="db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93" exitCode=0 Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.635492 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-22nvv" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.635812 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22nvv" event={"ID":"6c4b07aa-c412-4479-960e-1f79f4e68d96","Type":"ContainerDied","Data":"db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93"} Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.635845 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-22nvv" event={"ID":"6c4b07aa-c412-4479-960e-1f79f4e68d96","Type":"ContainerDied","Data":"e10c551c83fb6e3e38c69577d9af8929cad945afd4988ed20daee6b23492371d"} Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.635862 4687 scope.go:117] "RemoveContainer" containerID="db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.656369 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-catalog-content\") pod \"6c4b07aa-c412-4479-960e-1f79f4e68d96\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.656496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw2xx\" (UniqueName: \"kubernetes.io/projected/6c4b07aa-c412-4479-960e-1f79f4e68d96-kube-api-access-kw2xx\") pod \"6c4b07aa-c412-4479-960e-1f79f4e68d96\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.656529 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-utilities\") pod \"6c4b07aa-c412-4479-960e-1f79f4e68d96\" (UID: \"6c4b07aa-c412-4479-960e-1f79f4e68d96\") " Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.657935 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-utilities" (OuterVolumeSpecName: "utilities") pod "6c4b07aa-c412-4479-960e-1f79f4e68d96" (UID: "6c4b07aa-c412-4479-960e-1f79f4e68d96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.662602 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4b07aa-c412-4479-960e-1f79f4e68d96-kube-api-access-kw2xx" (OuterVolumeSpecName: "kube-api-access-kw2xx") pod "6c4b07aa-c412-4479-960e-1f79f4e68d96" (UID: "6c4b07aa-c412-4479-960e-1f79f4e68d96"). InnerVolumeSpecName "kube-api-access-kw2xx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.668887 4687 scope.go:117] "RemoveContainer" containerID="8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.752712 4687 scope.go:117] "RemoveContainer" containerID="acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.759649 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw2xx\" (UniqueName: \"kubernetes.io/projected/6c4b07aa-c412-4479-960e-1f79f4e68d96-kube-api-access-kw2xx\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.759684 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.807687 4687 scope.go:117] "RemoveContainer" containerID="db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.811126 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c4b07aa-c412-4479-960e-1f79f4e68d96" (UID: "6c4b07aa-c412-4479-960e-1f79f4e68d96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:37:18 crc kubenswrapper[4687]: E0312 16:37:18.813974 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93\": container with ID starting with db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93 not found: ID does not exist" containerID="db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.814154 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93"} err="failed to get container status \"db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93\": rpc error: code = NotFound desc = could not find container \"db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93\": container with ID starting with db902f71069dc3b82bcd7d5cde0d1935029435f12371073877336813b725da93 not found: ID does not exist" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.814257 4687 scope.go:117] "RemoveContainer" containerID="8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac" Mar 12 16:37:18 crc kubenswrapper[4687]: E0312 16:37:18.814707 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac\": container with ID starting with 8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac not found: ID does not exist" containerID="8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.814832 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac"} err="failed to get container status 
\"8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac\": rpc error: code = NotFound desc = could not find container \"8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac\": container with ID starting with 8086086d589281998cb198aa79cc9a411c7aec833483a5a67761e3ccf92f5bac not found: ID does not exist" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.814934 4687 scope.go:117] "RemoveContainer" containerID="acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da" Mar 12 16:37:18 crc kubenswrapper[4687]: E0312 16:37:18.815379 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da\": container with ID starting with acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da not found: ID does not exist" containerID="acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.815479 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da"} err="failed to get container status \"acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da\": rpc error: code = NotFound desc = could not find container \"acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da\": container with ID starting with acefae5870a37c6fa84977e389a97fef381c8d40590ec2dfcafdf8ed74f434da not found: ID does not exist" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.862371 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c4b07aa-c412-4479-960e-1f79f4e68d96-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:18 crc kubenswrapper[4687]: I0312 16:37:18.999742 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-22nvv"] Mar 12 16:37:19 crc kubenswrapper[4687]: I0312 16:37:19.010108 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-22nvv"] Mar 12 16:37:19 crc kubenswrapper[4687]: I0312 16:37:19.776987 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" path="/var/lib/kubelet/pods/6c4b07aa-c412-4479-960e-1f79f4e68d96/volumes" Mar 12 16:37:19 crc kubenswrapper[4687]: I0312 16:37:19.779983 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a21ed79-f865-4e4e-9794-2091bb9565c1" path="/var/lib/kubelet/pods/7a21ed79-f865-4e4e-9794-2091bb9565c1/volumes" Mar 12 16:37:19 crc kubenswrapper[4687]: I0312 16:37:19.780920 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2d3452-ab24-443c-addf-92e8f6ccc55e" path="/var/lib/kubelet/pods/8b2d3452-ab24-443c-addf-92e8f6ccc55e/volumes" Mar 12 16:37:27 crc kubenswrapper[4687]: I0312 16:37:27.032568 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-r62f9"] Mar 12 16:37:27 crc kubenswrapper[4687]: I0312 16:37:27.041650 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-r62f9"] Mar 12 16:37:27 crc kubenswrapper[4687]: I0312 16:37:27.744698 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b5d6f6-b886-4992-a758-0f4c72a872bf" path="/var/lib/kubelet/pods/07b5d6f6-b886-4992-a758-0f4c72a872bf/volumes" Mar 12 16:37:28 crc kubenswrapper[4687]: I0312 
16:37:28.057745 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wnhb5"] Mar 12 16:37:28 crc kubenswrapper[4687]: I0312 16:37:28.073417 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wnhb5"] Mar 12 16:37:29 crc kubenswrapper[4687]: I0312 16:37:29.748789 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d938a4a-a974-450c-b299-2e0e704d0da1" path="/var/lib/kubelet/pods/4d938a4a-a974-450c-b299-2e0e704d0da1/volumes" Mar 12 16:37:32 crc kubenswrapper[4687]: I0312 16:37:32.453710 4687 scope.go:117] "RemoveContainer" containerID="60ec0d8b163501a95d33c7156b7f11d7b75e1dc0c8221bf3ff26af335882207b" Mar 12 16:37:32 crc kubenswrapper[4687]: I0312 16:37:32.481196 4687 scope.go:117] "RemoveContainer" containerID="ec3b2dee10f12a712be8aea09f50352fb8204e230cee7434704f549cd6477aba" Mar 12 16:37:32 crc kubenswrapper[4687]: I0312 16:37:32.536087 4687 scope.go:117] "RemoveContainer" containerID="f0fcbeb2a84d714f1355040dda6e44a58d0236aa9e7891a9562c8a804021ea19" Mar 12 16:37:32 crc kubenswrapper[4687]: I0312 16:37:32.583662 4687 scope.go:117] "RemoveContainer" containerID="7a8e416b7bebb68d9da01b9f17f5a04b574fb03f44424745e4c1bde126eff216" Mar 12 16:37:32 crc kubenswrapper[4687]: I0312 16:37:32.651165 4687 scope.go:117] "RemoveContainer" containerID="5ef7dd16879245f379b914e7a034d8b5ecc587e1346e7681fe13657401e21bce" Mar 12 16:37:38 crc kubenswrapper[4687]: I0312 16:37:38.872948 4687 generic.go:334] "Generic (PLEG): container finished" podID="754c5904-0fe4-408c-bf65-439e420218f8" containerID="0a45f618c83cc0e12b0094e6c283e1a2c4914d18d23c04820ef142267ab78b79" exitCode=0 Mar 12 16:37:38 crc kubenswrapper[4687]: I0312 16:37:38.873023 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" event={"ID":"754c5904-0fe4-408c-bf65-439e420218f8","Type":"ContainerDied","Data":"0a45f618c83cc0e12b0094e6c283e1a2c4914d18d23c04820ef142267ab78b79"} Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.378315 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.496481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-inventory\") pod \"754c5904-0fe4-408c-bf65-439e420218f8\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.496579 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82czv\" (UniqueName: \"kubernetes.io/projected/754c5904-0fe4-408c-bf65-439e420218f8-kube-api-access-82czv\") pod \"754c5904-0fe4-408c-bf65-439e420218f8\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.496898 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam\") pod \"754c5904-0fe4-408c-bf65-439e420218f8\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.502473 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754c5904-0fe4-408c-bf65-439e420218f8-kube-api-access-82czv" (OuterVolumeSpecName: "kube-api-access-82czv") pod "754c5904-0fe4-408c-bf65-439e420218f8" (UID: "754c5904-0fe4-408c-bf65-439e420218f8"). InnerVolumeSpecName "kube-api-access-82czv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:37:40 crc kubenswrapper[4687]: E0312 16:37:40.527021 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam podName:754c5904-0fe4-408c-bf65-439e420218f8 nodeName:}" failed. No retries permitted until 2026-03-12 16:37:41.026406555 +0000 UTC m=+2109.990368899 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam") pod "754c5904-0fe4-408c-bf65-439e420218f8" (UID: "754c5904-0fe4-408c-bf65-439e420218f8") : error deleting /var/lib/kubelet/pods/754c5904-0fe4-408c-bf65-439e420218f8/volume-subpaths: remove /var/lib/kubelet/pods/754c5904-0fe4-408c-bf65-439e420218f8/volume-subpaths: no such file or directory Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.529217 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-inventory" (OuterVolumeSpecName: "inventory") pod "754c5904-0fe4-408c-bf65-439e420218f8" (UID: "754c5904-0fe4-408c-bf65-439e420218f8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.600268 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.600298 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82czv\" (UniqueName: \"kubernetes.io/projected/754c5904-0fe4-408c-bf65-439e420218f8-kube-api-access-82czv\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.896700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" event={"ID":"754c5904-0fe4-408c-bf65-439e420218f8","Type":"ContainerDied","Data":"365b69df1864fe7084db5ef7fa26c73d47f87e8eb0e81e4387cb2649b8db0cd3"} Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.896742 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="365b69df1864fe7084db5ef7fa26c73d47f87e8eb0e81e4387cb2649b8db0cd3" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.896804 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.979731 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhxs9"] Mar 12 16:37:40 crc kubenswrapper[4687]: E0312 16:37:40.980206 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754c5904-0fe4-408c-bf65-439e420218f8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.980224 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="754c5904-0fe4-408c-bf65-439e420218f8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:37:40 crc kubenswrapper[4687]: E0312 16:37:40.980251 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="registry-server" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.980257 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="registry-server" Mar 12 16:37:40 crc kubenswrapper[4687]: E0312 16:37:40.980280 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="extract-content" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.980286 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="extract-content" Mar 12 16:37:40 crc kubenswrapper[4687]: E0312 16:37:40.980302 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="extract-utilities" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.980308 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="extract-utilities" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.980544 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4b07aa-c412-4479-960e-1f79f4e68d96" containerName="registry-server" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.980568 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="754c5904-0fe4-408c-bf65-439e420218f8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:37:40 crc kubenswrapper[4687]: I0312 16:37:40.981609 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.005156 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhxs9"] Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.110914 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam\") pod \"754c5904-0fe4-408c-bf65-439e420218f8\" (UID: \"754c5904-0fe4-408c-bf65-439e420218f8\") " Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.111591 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.111626 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.111698 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlr5v\" (UniqueName: \"kubernetes.io/projected/bdec614b-1196-4d68-afa3-42f42fe2d7dc-kube-api-access-rlr5v\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.119469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "754c5904-0fe4-408c-bf65-439e420218f8" (UID: "754c5904-0fe4-408c-bf65-439e420218f8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.214255 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.214477 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.214604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlr5v\" (UniqueName: \"kubernetes.io/projected/bdec614b-1196-4d68-afa3-42f42fe2d7dc-kube-api-access-rlr5v\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.214913 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/754c5904-0fe4-408c-bf65-439e420218f8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.218902 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.220197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.231752 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlr5v\" (UniqueName: \"kubernetes.io/projected/bdec614b-1196-4d68-afa3-42f42fe2d7dc-kube-api-access-rlr5v\") pod \"ssh-known-hosts-edpm-deployment-vhxs9\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.313847 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:41 crc kubenswrapper[4687]: I0312 16:37:41.914414 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhxs9"] Mar 12 16:37:42 crc kubenswrapper[4687]: I0312 16:37:42.920826 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" event={"ID":"bdec614b-1196-4d68-afa3-42f42fe2d7dc","Type":"ContainerStarted","Data":"0a1905114afbb717a04c817de11c78e26b30387a90418f5f84999cfbd128bf94"} Mar 12 16:37:42 crc kubenswrapper[4687]: I0312 16:37:42.921461 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" event={"ID":"bdec614b-1196-4d68-afa3-42f42fe2d7dc","Type":"ContainerStarted","Data":"98f7cd960c7646e46b5ec8688850d4de1c27995d7b325f91d4ae7ce02febe6f1"} Mar 12 16:37:42 crc kubenswrapper[4687]: I0312 16:37:42.941736 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" podStartSLOduration=2.446343402 podStartE2EDuration="2.941713079s" podCreationTimestamp="2026-03-12 16:37:40 +0000 UTC" firstStartedPulling="2026-03-12 16:37:41.929629309 +0000 UTC m=+2110.893591673" lastFinishedPulling="2026-03-12 16:37:42.424998966 +0000 UTC m=+2111.388961350" observedRunningTime="2026-03-12 16:37:42.935968082 +0000 UTC m=+2111.899930446" watchObservedRunningTime="2026-03-12 16:37:42.941713079 +0000 UTC m=+2111.905675443" Mar 12 16:37:52 crc kubenswrapper[4687]: I0312 16:37:52.021726 4687 generic.go:334] "Generic (PLEG): container finished" podID="bdec614b-1196-4d68-afa3-42f42fe2d7dc" containerID="0a1905114afbb717a04c817de11c78e26b30387a90418f5f84999cfbd128bf94" exitCode=0 Mar 12 16:37:52 crc kubenswrapper[4687]: I0312 16:37:52.022199 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" event={"ID":"bdec614b-1196-4d68-afa3-42f42fe2d7dc","Type":"ContainerDied","Data":"0a1905114afbb717a04c817de11c78e26b30387a90418f5f84999cfbd128bf94"} Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.604031 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.731701 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-inventory-0\") pod \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.731745 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlr5v\" (UniqueName: \"kubernetes.io/projected/bdec614b-1196-4d68-afa3-42f42fe2d7dc-kube-api-access-rlr5v\") pod \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.732008 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-ssh-key-openstack-edpm-ipam\") pod \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\" (UID: \"bdec614b-1196-4d68-afa3-42f42fe2d7dc\") " Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.738007 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdec614b-1196-4d68-afa3-42f42fe2d7dc-kube-api-access-rlr5v" (OuterVolumeSpecName: "kube-api-access-rlr5v") pod "bdec614b-1196-4d68-afa3-42f42fe2d7dc" (UID: "bdec614b-1196-4d68-afa3-42f42fe2d7dc"). InnerVolumeSpecName "kube-api-access-rlr5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.766870 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "bdec614b-1196-4d68-afa3-42f42fe2d7dc" (UID: "bdec614b-1196-4d68-afa3-42f42fe2d7dc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.781380 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bdec614b-1196-4d68-afa3-42f42fe2d7dc" (UID: "bdec614b-1196-4d68-afa3-42f42fe2d7dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.834761 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.834801 4687 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/bdec614b-1196-4d68-afa3-42f42fe2d7dc-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:53 crc kubenswrapper[4687]: I0312 16:37:53.834815 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlr5v\" (UniqueName: \"kubernetes.io/projected/bdec614b-1196-4d68-afa3-42f42fe2d7dc-kube-api-access-rlr5v\") on node \"crc\" DevicePath \"\"" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.043469 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" event={"ID":"bdec614b-1196-4d68-afa3-42f42fe2d7dc","Type":"ContainerDied","Data":"98f7cd960c7646e46b5ec8688850d4de1c27995d7b325f91d4ae7ce02febe6f1"} Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.043818 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f7cd960c7646e46b5ec8688850d4de1c27995d7b325f91d4ae7ce02febe6f1" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.043531 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhxs9" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.140763 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2"] Mar 12 16:37:54 crc kubenswrapper[4687]: E0312 16:37:54.141315 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdec614b-1196-4d68-afa3-42f42fe2d7dc" containerName="ssh-known-hosts-edpm-deployment" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.141331 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdec614b-1196-4d68-afa3-42f42fe2d7dc" containerName="ssh-known-hosts-edpm-deployment" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.141596 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdec614b-1196-4d68-afa3-42f42fe2d7dc" containerName="ssh-known-hosts-edpm-deployment" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.142492 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.147166 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.147732 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.147928 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.150549 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.151652 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2"] Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.243593 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tw6n\" (UniqueName: \"kubernetes.io/projected/c0fd04af-824b-4f03-a28c-cfa84d27c015-kube-api-access-8tw6n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.243883 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.244115 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.346315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.346721 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tw6n\" (UniqueName: \"kubernetes.io/projected/c0fd04af-824b-4f03-a28c-cfa84d27c015-kube-api-access-8tw6n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.346836 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.353275 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.353769 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.383913 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tw6n\" (UniqueName: \"kubernetes.io/projected/c0fd04af-824b-4f03-a28c-cfa84d27c015-kube-api-access-8tw6n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9cbz2\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:54 crc kubenswrapper[4687]: I0312 16:37:54.478402 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:37:55 crc kubenswrapper[4687]: I0312 16:37:55.027145 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2"] Mar 12 16:37:55 crc kubenswrapper[4687]: I0312 16:37:55.056862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" event={"ID":"c0fd04af-824b-4f03-a28c-cfa84d27c015","Type":"ContainerStarted","Data":"2afec6637a7f72cfe099cbbc57d124679133f469df60d10b7920a8656e36bdc2"} Mar 12 16:37:56 crc kubenswrapper[4687]: I0312 16:37:56.068427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" event={"ID":"c0fd04af-824b-4f03-a28c-cfa84d27c015","Type":"ContainerStarted","Data":"99da8dbdf83968e69e5bf03b7ff507c2714fb5dbf124df83d0c3cdf665897996"} Mar 12 16:37:56 crc kubenswrapper[4687]: I0312 16:37:56.096685 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" podStartSLOduration=1.6738429030000002 podStartE2EDuration="2.096668434s" podCreationTimestamp="2026-03-12 16:37:54 +0000 UTC" firstStartedPulling="2026-03-12 16:37:55.033381591 +0000 UTC m=+2123.997343935" lastFinishedPulling="2026-03-12 16:37:55.456207112 +0000 UTC m=+2124.420169466" observedRunningTime="2026-03-12 16:37:56.091182534 +0000 UTC m=+2125.055144918" watchObservedRunningTime="2026-03-12 16:37:56.096668434 +0000 UTC m=+2125.060630778" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.137223 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555558-796tg"] Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.140427 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555558-796tg" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.143418 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.144247 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.144462 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.159546 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555558-796tg"] Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.200702 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txn6j\" (UniqueName: \"kubernetes.io/projected/4ed5ac57-3c74-4f30-a1d2-90ad98c470fa-kube-api-access-txn6j\") pod \"auto-csr-approver-29555558-796tg\" (UID: \"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa\") " pod="openshift-infra/auto-csr-approver-29555558-796tg" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.303918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txn6j\" (UniqueName: \"kubernetes.io/projected/4ed5ac57-3c74-4f30-a1d2-90ad98c470fa-kube-api-access-txn6j\") pod \"auto-csr-approver-29555558-796tg\" (UID: \"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa\") " pod="openshift-infra/auto-csr-approver-29555558-796tg" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.340005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txn6j\" (UniqueName: \"kubernetes.io/projected/4ed5ac57-3c74-4f30-a1d2-90ad98c470fa-kube-api-access-txn6j\") pod \"auto-csr-approver-29555558-796tg\" (UID: \"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa\") " pod="openshift-infra/auto-csr-approver-29555558-796tg" Mar 12 16:38:00 crc kubenswrapper[4687]: I0312 16:38:00.468904 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555558-796tg" Mar 12 16:38:01 crc kubenswrapper[4687]: I0312 16:38:01.124432 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555558-796tg"] Mar 12 16:38:01 crc kubenswrapper[4687]: I0312 16:38:01.139448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555558-796tg" event={"ID":"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa","Type":"ContainerStarted","Data":"75f0306858c3937c31e7a0d8ff3723d43d0b50548870619d7992d5378c9aa5db"} Mar 12 16:38:03 crc kubenswrapper[4687]: I0312 16:38:03.161224 4687 generic.go:334] "Generic (PLEG): container finished" podID="4ed5ac57-3c74-4f30-a1d2-90ad98c470fa" containerID="248274be69d61ddf968173d9e2c6045dd651bf6c0480731415b33f2d8d04aee9" exitCode=0 Mar 12 16:38:03 crc kubenswrapper[4687]: I0312 16:38:03.161723 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555558-796tg" event={"ID":"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa","Type":"ContainerDied","Data":"248274be69d61ddf968173d9e2c6045dd651bf6c0480731415b33f2d8d04aee9"} Mar 12 16:38:04 crc kubenswrapper[4687]: I0312 16:38:04.176039 4687 generic.go:334] "Generic (PLEG): container finished" podID="c0fd04af-824b-4f03-a28c-cfa84d27c015" containerID="99da8dbdf83968e69e5bf03b7ff507c2714fb5dbf124df83d0c3cdf665897996" exitCode=0 Mar 12 16:38:04 crc kubenswrapper[4687]: I0312 16:38:04.176222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" event={"ID":"c0fd04af-824b-4f03-a28c-cfa84d27c015","Type":"ContainerDied","Data":"99da8dbdf83968e69e5bf03b7ff507c2714fb5dbf124df83d0c3cdf665897996"} Mar 12 16:38:04 crc kubenswrapper[4687]: I0312 16:38:04.570659 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555558-796tg" Mar 12 16:38:04 crc kubenswrapper[4687]: I0312 16:38:04.623346 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txn6j\" (UniqueName: \"kubernetes.io/projected/4ed5ac57-3c74-4f30-a1d2-90ad98c470fa-kube-api-access-txn6j\") pod \"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa\" (UID: \"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa\") " Mar 12 16:38:04 crc kubenswrapper[4687]: I0312 16:38:04.629108 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed5ac57-3c74-4f30-a1d2-90ad98c470fa-kube-api-access-txn6j" (OuterVolumeSpecName: "kube-api-access-txn6j") pod "4ed5ac57-3c74-4f30-a1d2-90ad98c470fa" (UID: "4ed5ac57-3c74-4f30-a1d2-90ad98c470fa"). InnerVolumeSpecName "kube-api-access-txn6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:38:04 crc kubenswrapper[4687]: I0312 16:38:04.726114 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txn6j\" (UniqueName: \"kubernetes.io/projected/4ed5ac57-3c74-4f30-a1d2-90ad98c470fa-kube-api-access-txn6j\") on node \"crc\" DevicePath \"\"" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.190859 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555558-796tg" event={"ID":"4ed5ac57-3c74-4f30-a1d2-90ad98c470fa","Type":"ContainerDied","Data":"75f0306858c3937c31e7a0d8ff3723d43d0b50548870619d7992d5378c9aa5db"} Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.190906 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f0306858c3937c31e7a0d8ff3723d43d0b50548870619d7992d5378c9aa5db" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.190947 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555558-796tg" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.639275 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555552-xcqt2"] Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.649232 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555552-xcqt2"] Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.658776 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.746469 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4248d8-d237-4160-9d5a-8f7d232e252b" path="/var/lib/kubelet/pods/dd4248d8-d237-4160-9d5a-8f7d232e252b/volumes" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.750212 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-ssh-key-openstack-edpm-ipam\") pod \"c0fd04af-824b-4f03-a28c-cfa84d27c015\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.750327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-inventory\") pod \"c0fd04af-824b-4f03-a28c-cfa84d27c015\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.750404 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tw6n\" (UniqueName: \"kubernetes.io/projected/c0fd04af-824b-4f03-a28c-cfa84d27c015-kube-api-access-8tw6n\") pod \"c0fd04af-824b-4f03-a28c-cfa84d27c015\" (UID: \"c0fd04af-824b-4f03-a28c-cfa84d27c015\") " Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.755822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fd04af-824b-4f03-a28c-cfa84d27c015-kube-api-access-8tw6n" (OuterVolumeSpecName: "kube-api-access-8tw6n") pod "c0fd04af-824b-4f03-a28c-cfa84d27c015" (UID: "c0fd04af-824b-4f03-a28c-cfa84d27c015"). InnerVolumeSpecName "kube-api-access-8tw6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.785632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-inventory" (OuterVolumeSpecName: "inventory") pod "c0fd04af-824b-4f03-a28c-cfa84d27c015" (UID: "c0fd04af-824b-4f03-a28c-cfa84d27c015"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.786674 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0fd04af-824b-4f03-a28c-cfa84d27c015" (UID: "c0fd04af-824b-4f03-a28c-cfa84d27c015"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.853864 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.853894 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tw6n\" (UniqueName: \"kubernetes.io/projected/c0fd04af-824b-4f03-a28c-cfa84d27c015-kube-api-access-8tw6n\") on node \"crc\" DevicePath \"\"" Mar 12 16:38:05 crc kubenswrapper[4687]: I0312 16:38:05.853904 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd04af-824b-4f03-a28c-cfa84d27c015-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.204668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" event={"ID":"c0fd04af-824b-4f03-a28c-cfa84d27c015","Type":"ContainerDied","Data":"2afec6637a7f72cfe099cbbc57d124679133f469df60d10b7920a8656e36bdc2"} Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.204981 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2afec6637a7f72cfe099cbbc57d124679133f469df60d10b7920a8656e36bdc2" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.205044 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9cbz2" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.297331 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq"] Mar 12 16:38:06 crc kubenswrapper[4687]: E0312 16:38:06.298161 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed5ac57-3c74-4f30-a1d2-90ad98c470fa" containerName="oc" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.298231 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed5ac57-3c74-4f30-a1d2-90ad98c470fa" containerName="oc" Mar 12 16:38:06 crc kubenswrapper[4687]: E0312 16:38:06.298322 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fd04af-824b-4f03-a28c-cfa84d27c015" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.298393 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fd04af-824b-4f03-a28c-cfa84d27c015" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.298642 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fd04af-824b-4f03-a28c-cfa84d27c015" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.298735 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed5ac57-3c74-4f30-a1d2-90ad98c470fa" containerName="oc" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.299691 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.302509 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.302677 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.302788 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.303063 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.326960 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq"] Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.365095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.365259 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 
16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.365283 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7pz\" (UniqueName: \"kubernetes.io/projected/ad369604-a912-475e-9904-5e8e4aa03271-kube-api-access-vl7pz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.467252 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.467308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7pz\" (UniqueName: \"kubernetes.io/projected/ad369604-a912-475e-9904-5e8e4aa03271-kube-api-access-vl7pz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.467457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.473212 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.484059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.485149 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7pz\" (UniqueName: \"kubernetes.io/projected/ad369604-a912-475e-9904-5e8e4aa03271-kube-api-access-vl7pz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:06 crc kubenswrapper[4687]: I0312 16:38:06.624383 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:07 crc kubenswrapper[4687]: I0312 16:38:07.150923 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq"] Mar 12 16:38:07 crc kubenswrapper[4687]: I0312 16:38:07.215557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" event={"ID":"ad369604-a912-475e-9904-5e8e4aa03271","Type":"ContainerStarted","Data":"fb586a21607a511ea1dcf4a09a55b0b9560f3af2430168a678ceb89764fd0817"} Mar 12 16:38:08 crc kubenswrapper[4687]: I0312 16:38:08.230456 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" event={"ID":"ad369604-a912-475e-9904-5e8e4aa03271","Type":"ContainerStarted","Data":"d63a4e9a0ba31438c564aee36aa4041562ecca9876c22160845d7cbf7f5f1a12"} Mar 12 16:38:08 crc kubenswrapper[4687]: I0312 16:38:08.257527 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" podStartSLOduration=1.7724291330000002 podStartE2EDuration="2.257502019s" podCreationTimestamp="2026-03-12 16:38:06 +0000 UTC" firstStartedPulling="2026-03-12 16:38:07.148431433 +0000 UTC m=+2136.112393767" lastFinishedPulling="2026-03-12 16:38:07.633504309 +0000 UTC m=+2136.597466653" observedRunningTime="2026-03-12 16:38:08.247759002 +0000 UTC m=+2137.211721336" watchObservedRunningTime="2026-03-12 16:38:08.257502019 +0000 UTC m=+2137.221464403" Mar 12 16:38:11 crc kubenswrapper[4687]: I0312 16:38:11.046936 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7cr7"] Mar 12 16:38:11 crc kubenswrapper[4687]: I0312 16:38:11.057345 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7cr7"] Mar 12 16:38:11 crc kubenswrapper[4687]: I0312 16:38:11.756681 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6ae379-ebea-4caf-819d-2c566201e67a" path="/var/lib/kubelet/pods/af6ae379-ebea-4caf-819d-2c566201e67a/volumes" Mar 12 16:38:17 crc kubenswrapper[4687]: I0312 16:38:17.330133 4687 generic.go:334] "Generic (PLEG): container finished" podID="ad369604-a912-475e-9904-5e8e4aa03271" containerID="d63a4e9a0ba31438c564aee36aa4041562ecca9876c22160845d7cbf7f5f1a12" exitCode=0 Mar 12 16:38:17 crc kubenswrapper[4687]: I0312 16:38:17.330216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" event={"ID":"ad369604-a912-475e-9904-5e8e4aa03271","Type":"ContainerDied","Data":"d63a4e9a0ba31438c564aee36aa4041562ecca9876c22160845d7cbf7f5f1a12"} Mar 12 16:38:18 crc kubenswrapper[4687]: I0312 16:38:18.853250 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:18 crc kubenswrapper[4687]: I0312 16:38:18.998005 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7pz\" (UniqueName: \"kubernetes.io/projected/ad369604-a912-475e-9904-5e8e4aa03271-kube-api-access-vl7pz\") pod \"ad369604-a912-475e-9904-5e8e4aa03271\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " Mar 12 16:38:18 crc kubenswrapper[4687]: I0312 16:38:18.998451 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-inventory\") pod \"ad369604-a912-475e-9904-5e8e4aa03271\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " Mar 12 16:38:18 crc kubenswrapper[4687]: I0312 16:38:18.998628 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-ssh-key-openstack-edpm-ipam\") pod \"ad369604-a912-475e-9904-5e8e4aa03271\" (UID: \"ad369604-a912-475e-9904-5e8e4aa03271\") " Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.007619 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad369604-a912-475e-9904-5e8e4aa03271-kube-api-access-vl7pz" (OuterVolumeSpecName: "kube-api-access-vl7pz") pod "ad369604-a912-475e-9904-5e8e4aa03271" (UID: "ad369604-a912-475e-9904-5e8e4aa03271"). InnerVolumeSpecName "kube-api-access-vl7pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.044548 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-inventory" (OuterVolumeSpecName: "inventory") pod "ad369604-a912-475e-9904-5e8e4aa03271" (UID: "ad369604-a912-475e-9904-5e8e4aa03271"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.047342 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad369604-a912-475e-9904-5e8e4aa03271" (UID: "ad369604-a912-475e-9904-5e8e4aa03271"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.102087 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.102128 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7pz\" (UniqueName: \"kubernetes.io/projected/ad369604-a912-475e-9904-5e8e4aa03271-kube-api-access-vl7pz\") on node \"crc\" DevicePath \"\"" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.102138 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad369604-a912-475e-9904-5e8e4aa03271-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.350487 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" event={"ID":"ad369604-a912-475e-9904-5e8e4aa03271","Type":"ContainerDied","Data":"fb586a21607a511ea1dcf4a09a55b0b9560f3af2430168a678ceb89764fd0817"} Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.350762 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb586a21607a511ea1dcf4a09a55b0b9560f3af2430168a678ceb89764fd0817" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.350525 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.525677 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk"] Mar 12 16:38:19 crc kubenswrapper[4687]: E0312 16:38:19.526153 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad369604-a912-475e-9904-5e8e4aa03271" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.526172 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad369604-a912-475e-9904-5e8e4aa03271" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.526382 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad369604-a912-475e-9904-5e8e4aa03271" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.527194 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.528848 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.529189 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.529216 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.529339 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.530639 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.531122 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.531536 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.531680 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.536215 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.549823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk"] Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614230 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614282 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614307 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614340 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614516 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614581 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614674 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614771 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vlb\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-kube-api-access-g9vlb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.614974 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.615030 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.615063 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.615127 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.615224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.615280 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.615303 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.615471 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.718090 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.718166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.718214 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.718241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.718262 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719037 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719176 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719193 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719223 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vlb\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-kube-api-access-g9vlb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719293 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719328 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719350 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719394 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.719483 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.722028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.722139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.722303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.722476 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.723078 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.723687 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.723960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.724586 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.725427 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.725634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.725660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.726827 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.727101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.728271 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.728547 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: 
\"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.741201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vlb\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-kube-api-access-g9vlb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-557tk\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:19 crc kubenswrapper[4687]: I0312 16:38:19.847352 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:38:20 crc kubenswrapper[4687]: I0312 16:38:20.377670 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk"] Mar 12 16:38:20 crc kubenswrapper[4687]: W0312 16:38:20.385380 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3338fca0_a722_4b15_8422_f36e65ad1a2b.slice/crio-8c930b44476c776561520c494a667f635cbb5e63c99c724e2c061c0e87fa4e0d WatchSource:0}: Error finding container 8c930b44476c776561520c494a667f635cbb5e63c99c724e2c061c0e87fa4e0d: Status 404 returned error can't find the container with id 8c930b44476c776561520c494a667f635cbb5e63c99c724e2c061c0e87fa4e0d Mar 12 16:38:21 crc kubenswrapper[4687]: I0312 16:38:21.373108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" event={"ID":"3338fca0-a722-4b15-8422-f36e65ad1a2b","Type":"ContainerStarted","Data":"1d5ca54899ed083a17f88a4520eb52ee4f1d3a42ff68c76ea76fb23012816fa3"} Mar 12 16:38:21 crc kubenswrapper[4687]: I0312 16:38:21.373915 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" event={"ID":"3338fca0-a722-4b15-8422-f36e65ad1a2b","Type":"ContainerStarted","Data":"8c930b44476c776561520c494a667f635cbb5e63c99c724e2c061c0e87fa4e0d"} Mar 12 16:38:21 crc kubenswrapper[4687]: I0312 16:38:21.402637 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" podStartSLOduration=1.9378820970000001 podStartE2EDuration="2.402614525s" podCreationTimestamp="2026-03-12 16:38:19 +0000 UTC" firstStartedPulling="2026-03-12 16:38:20.389051684 +0000 UTC m=+2149.353014028" lastFinishedPulling="2026-03-12 16:38:20.853784082 +0000 UTC m=+2149.817746456" observedRunningTime="2026-03-12 16:38:21.391753567 +0000 UTC m=+2150.355715931" watchObservedRunningTime="2026-03-12 16:38:21.402614525 +0000 UTC m=+2150.366576889" Mar 12 16:38:32 crc kubenswrapper[4687]: I0312 16:38:32.873501 4687 scope.go:117] "RemoveContainer" containerID="378e3190178324307210a5cdd0ebac2f75371511d80cff86703c532d94cb282f" Mar 12 16:38:32 crc kubenswrapper[4687]: I0312 16:38:32.945187 4687 scope.go:117] "RemoveContainer" containerID="149578c8036321e87943445e8117d2c7e4e6d2daa770b9a61319b32f4a0324cf" Mar 12 16:38:57 crc kubenswrapper[4687]: I0312 16:38:57.993193 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rg8ww"] Mar 12 16:38:57 crc kubenswrapper[4687]: I0312 16:38:57.996058 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.010786 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rg8ww"] Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.083961 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-utilities\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.084570 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2jzv\" (UniqueName: \"kubernetes.io/projected/2b34c42c-ede5-4a8d-b3a9-20068603333e-kube-api-access-j2jzv\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.084629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-catalog-content\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.186820 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2jzv\" (UniqueName: \"kubernetes.io/projected/2b34c42c-ede5-4a8d-b3a9-20068603333e-kube-api-access-j2jzv\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.186884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-catalog-content\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.186990 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-utilities\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.187516 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-catalog-content\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.187624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-utilities\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.226322 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j2jzv\" (UniqueName: \"kubernetes.io/projected/2b34c42c-ede5-4a8d-b3a9-20068603333e-kube-api-access-j2jzv\") pod \"community-operators-rg8ww\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.321521 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:38:58 crc kubenswrapper[4687]: I0312 16:38:58.866111 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rg8ww"] Mar 12 16:38:59 crc kubenswrapper[4687]: I0312 16:38:59.857034 4687 generic.go:334] "Generic (PLEG): container finished" podID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerID="5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab" exitCode=0 Mar 12 16:38:59 crc kubenswrapper[4687]: I0312 16:38:59.857122 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rg8ww" event={"ID":"2b34c42c-ede5-4a8d-b3a9-20068603333e","Type":"ContainerDied","Data":"5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab"} Mar 12 16:38:59 crc kubenswrapper[4687]: I0312 16:38:59.857344 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rg8ww" event={"ID":"2b34c42c-ede5-4a8d-b3a9-20068603333e","Type":"ContainerStarted","Data":"ae08f002e2418ff0b65a4ef4bb800def0958e6eb7126801217bd98c73b0abc47"} Mar 12 16:39:01 crc kubenswrapper[4687]: I0312 16:39:01.881474 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rg8ww" event={"ID":"2b34c42c-ede5-4a8d-b3a9-20068603333e","Type":"ContainerStarted","Data":"9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4"} Mar 12 16:39:02 crc kubenswrapper[4687]: I0312 16:39:02.893002 4687 generic.go:334] "Generic (PLEG): container finished" podID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerID="9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4" exitCode=0 Mar 12 16:39:02 crc kubenswrapper[4687]: I0312 16:39:02.893148 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rg8ww" event={"ID":"2b34c42c-ede5-4a8d-b3a9-20068603333e","Type":"ContainerDied","Data":"9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4"} Mar 12 16:39:03 crc kubenswrapper[4687]: I0312 16:39:03.906208 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rg8ww" event={"ID":"2b34c42c-ede5-4a8d-b3a9-20068603333e","Type":"ContainerStarted","Data":"859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122"} Mar 12 16:39:03 crc kubenswrapper[4687]: I0312 16:39:03.925084 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rg8ww" podStartSLOduration=3.25850888 podStartE2EDuration="6.925064644s" podCreationTimestamp="2026-03-12 16:38:57 +0000 UTC" firstStartedPulling="2026-03-12 16:38:59.861988459 +0000 UTC m=+2188.825950803" lastFinishedPulling="2026-03-12 16:39:03.528544223 +0000 UTC m=+2192.492506567" observedRunningTime="2026-03-12 16:39:03.923056938 +0000 UTC m=+2192.887019292" watchObservedRunningTime="2026-03-12 16:39:03.925064644 +0000 UTC m=+2192.889026988" Mar 12 16:39:04 crc kubenswrapper[4687]: I0312 16:39:04.917559 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="3338fca0-a722-4b15-8422-f36e65ad1a2b" containerID="1d5ca54899ed083a17f88a4520eb52ee4f1d3a42ff68c76ea76fb23012816fa3" exitCode=0 Mar 12 16:39:04 crc kubenswrapper[4687]: I0312 16:39:04.917602 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" event={"ID":"3338fca0-a722-4b15-8422-f36e65ad1a2b","Type":"ContainerDied","Data":"1d5ca54899ed083a17f88a4520eb52ee4f1d3a42ff68c76ea76fb23012816fa3"} Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.452755 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509589 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-inventory\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509671 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509700 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9vlb\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-kube-api-access-g9vlb\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509729 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-libvirt-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509825 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509845 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509870 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509901 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509926 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-bootstrap-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509960 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-nova-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.509992 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-power-monitoring-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.510033 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.510083 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ssh-key-openstack-edpm-ipam\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.510182 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-repo-setup-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.510246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-neutron-metadata-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.510306 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ovn-combined-ca-bundle\") pod \"3338fca0-a722-4b15-8422-f36e65ad1a2b\" (UID: \"3338fca0-a722-4b15-8422-f36e65ad1a2b\") " Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.523505 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.523629 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.523677 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.523672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.523766 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.523841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.523960 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.525344 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.526588 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.526600 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.526721 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.536841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.548227 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-kube-api-access-g9vlb" (OuterVolumeSpecName: "kube-api-access-g9vlb") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "kube-api-access-g9vlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.551609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.557035 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-inventory" (OuterVolumeSpecName: "inventory") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.565750 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3338fca0-a722-4b15-8422-f36e65ad1a2b" (UID: "3338fca0-a722-4b15-8422-f36e65ad1a2b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613864 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613907 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613920 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613931 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613942 4687 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613952 4687 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613963 4687 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613972 4687 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613981 4687 
reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.613992 4687 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.614004 4687 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.614022 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.614034 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.614044 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.614054 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9vlb\" (UniqueName: \"kubernetes.io/projected/3338fca0-a722-4b15-8422-f36e65ad1a2b-kube-api-access-g9vlb\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.614063 4687 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3338fca0-a722-4b15-8422-f36e65ad1a2b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.963236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" event={"ID":"3338fca0-a722-4b15-8422-f36e65ad1a2b","Type":"ContainerDied","Data":"8c930b44476c776561520c494a667f635cbb5e63c99c724e2c061c0e87fa4e0d"} Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.963277 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c930b44476c776561520c494a667f635cbb5e63c99c724e2c061c0e87fa4e0d" Mar 12 16:39:06 crc kubenswrapper[4687]: I0312 16:39:06.963302 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-557tk" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.079808 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j"] Mar 12 16:39:07 crc kubenswrapper[4687]: E0312 16:39:07.080384 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3338fca0-a722-4b15-8422-f36e65ad1a2b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.080409 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3338fca0-a722-4b15-8422-f36e65ad1a2b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.080618 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3338fca0-a722-4b15-8422-f36e65ad1a2b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.081438 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.085484 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.085969 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.085972 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.086289 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.086433 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.099468 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j"] Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.229665 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.229965 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.230020 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.230150 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.230190 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtfh\" (UniqueName: \"kubernetes.io/projected/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-kube-api-access-qhtfh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.332573 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.332621 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhtfh\" (UniqueName: \"kubernetes.io/projected/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-kube-api-access-qhtfh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.332800 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.332905 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.332975 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.334044 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.336862 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.337025 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.337538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.350800 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhtfh\" (UniqueName: \"kubernetes.io/projected/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-kube-api-access-qhtfh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlb4j\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.396456 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:39:07 crc kubenswrapper[4687]: W0312 16:39:07.963922 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde0d3f98_6bf7_432a_ab9c_5be397f44fc2.slice/crio-32f0bcda0433873632461556ec3185ff50b97094e93f1f64abb80329f42e3559 WatchSource:0}: Error finding container 32f0bcda0433873632461556ec3185ff50b97094e93f1f64abb80329f42e3559: Status 404 returned error can't find the container with id 32f0bcda0433873632461556ec3185ff50b97094e93f1f64abb80329f42e3559 Mar 12 16:39:07 crc kubenswrapper[4687]: I0312 16:39:07.973090 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j"] Mar 12 16:39:08 crc kubenswrapper[4687]: I0312 16:39:08.323397 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:39:08 crc kubenswrapper[4687]: I0312 16:39:08.323703 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:39:08 crc kubenswrapper[4687]: I0312 16:39:08.383712 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:39:08 crc kubenswrapper[4687]: I0312 16:39:08.998755 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" event={"ID":"de0d3f98-6bf7-432a-ab9c-5be397f44fc2","Type":"ContainerStarted","Data":"099f3cba9ffe4fdeaf66662832049990eb90eda9b6cb65143eca5060ba95d87a"} Mar 12 16:39:09 crc kubenswrapper[4687]: I0312 16:39:09.001136 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" event={"ID":"de0d3f98-6bf7-432a-ab9c-5be397f44fc2","Type":"ContainerStarted","Data":"32f0bcda0433873632461556ec3185ff50b97094e93f1f64abb80329f42e3559"} Mar 12 16:39:09 crc kubenswrapper[4687]: I0312 16:39:09.024799 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" podStartSLOduration=1.54867469 podStartE2EDuration="2.02475257s" podCreationTimestamp="2026-03-12 16:39:07 +0000 UTC" firstStartedPulling="2026-03-12 16:39:07.966984588 +0000 UTC m=+2196.930946932" lastFinishedPulling="2026-03-12 16:39:08.443062468 +0000 UTC m=+2197.407024812" observedRunningTime="2026-03-12 16:39:09.018821558 +0000 UTC m=+2197.982783902" watchObservedRunningTime="2026-03-12 16:39:09.02475257 +0000 UTC m=+2197.988714914" Mar 12 16:39:09 crc kubenswrapper[4687]: I0312 16:39:09.055544 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:39:09 crc kubenswrapper[4687]: I0312 16:39:09.104633 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rg8ww"] Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.016970 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rg8ww" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="registry-server" containerID="cri-o://859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122" gracePeriod=2 Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.541676 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.649114 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-utilities\") pod \"2b34c42c-ede5-4a8d-b3a9-20068603333e\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.649437 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2jzv\" (UniqueName: \"kubernetes.io/projected/2b34c42c-ede5-4a8d-b3a9-20068603333e-kube-api-access-j2jzv\") pod \"2b34c42c-ede5-4a8d-b3a9-20068603333e\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.649812 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-catalog-content\") pod \"2b34c42c-ede5-4a8d-b3a9-20068603333e\" (UID: \"2b34c42c-ede5-4a8d-b3a9-20068603333e\") " Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.655600 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-utilities" (OuterVolumeSpecName: "utilities") pod "2b34c42c-ede5-4a8d-b3a9-20068603333e" (UID: "2b34c42c-ede5-4a8d-b3a9-20068603333e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.656029 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b34c42c-ede5-4a8d-b3a9-20068603333e-kube-api-access-j2jzv" (OuterVolumeSpecName: "kube-api-access-j2jzv") pod "2b34c42c-ede5-4a8d-b3a9-20068603333e" (UID: "2b34c42c-ede5-4a8d-b3a9-20068603333e"). InnerVolumeSpecName "kube-api-access-j2jzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.704840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b34c42c-ede5-4a8d-b3a9-20068603333e" (UID: "2b34c42c-ede5-4a8d-b3a9-20068603333e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.751894 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.751924 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b34c42c-ede5-4a8d-b3a9-20068603333e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:11 crc kubenswrapper[4687]: I0312 16:39:11.751935 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2jzv\" (UniqueName: \"kubernetes.io/projected/2b34c42c-ede5-4a8d-b3a9-20068603333e-kube-api-access-j2jzv\") on node \"crc\" DevicePath \"\"" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.030172 4687 generic.go:334] "Generic (PLEG): container finished" podID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerID="859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122" exitCode=0 Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.030219 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rg8ww" event={"ID":"2b34c42c-ede5-4a8d-b3a9-20068603333e","Type":"ContainerDied","Data":"859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122"} Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.030249 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rg8ww" event={"ID":"2b34c42c-ede5-4a8d-b3a9-20068603333e","Type":"ContainerDied","Data":"ae08f002e2418ff0b65a4ef4bb800def0958e6eb7126801217bd98c73b0abc47"} Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.030268 4687 scope.go:117] "RemoveContainer" containerID="859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.030322 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rg8ww" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.055322 4687 scope.go:117] "RemoveContainer" containerID="9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.072927 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rg8ww"] Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.074016 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rg8ww"] Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.084551 4687 scope.go:117] "RemoveContainer" containerID="5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.141463 4687 scope.go:117] "RemoveContainer" containerID="859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122" Mar 12 16:39:12 crc kubenswrapper[4687]: E0312 16:39:12.141962 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122\": container with ID starting with 859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122 not found: ID does not exist" containerID="859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.141991 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122"} err="failed to get container status \"859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122\": rpc error: code = NotFound desc = could not find container \"859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122\": container with ID starting with 859bcce1310668d931a3f939c3b89c7d80d219644716c125863da9c8c3b21122 not found: ID does not exist" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.142011 4687 scope.go:117] "RemoveContainer" containerID="9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4" Mar 12 16:39:12 crc kubenswrapper[4687]: E0312 16:39:12.142369 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4\": container with ID starting with 9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4 not found: ID does not exist" containerID="9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.142404 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4"} err="failed to get container status \"9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4\": rpc error: code = NotFound desc = could not find container \"9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4\": container with ID starting with 9ae3c3e14142d462aa45b669c0d604d7659788dea347170b22c84931014bbbc4 not found: ID does not exist" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.142419 4687 scope.go:117] "RemoveContainer" containerID="5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab" Mar 12 16:39:12 crc kubenswrapper[4687]: E0312 16:39:12.143019 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab\": container with ID starting with 5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab not found: ID does not exist" containerID="5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab" Mar 12 16:39:12 crc kubenswrapper[4687]: I0312 16:39:12.143074 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab"} err="failed to get container status \"5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab\": rpc error: code = NotFound desc = could not find container \"5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab\": container with ID starting with 5bd7e56ef098365afe968a8d17ded981c5df923298cd647bab75933589aabcab not found: ID does not exist" Mar 12 16:39:13 crc kubenswrapper[4687]: I0312 16:39:13.745807 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" path="/var/lib/kubelet/pods/2b34c42c-ede5-4a8d-b3a9-20068603333e/volumes" Mar 12 16:39:14 crc kubenswrapper[4687]: I0312 16:39:14.121901 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:39:14 crc kubenswrapper[4687]: I0312 16:39:14.121959 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:39:29 crc kubenswrapper[4687]: I0312 16:39:29.056235 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-dv82n"] Mar 12 16:39:29 crc kubenswrapper[4687]: I0312 16:39:29.071788 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-dv82n"] Mar 12 16:39:29 crc kubenswrapper[4687]: I0312 16:39:29.753604 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f9932b-ed80-41a1-aea7-60b0466ebe7b" path="/var/lib/kubelet/pods/d5f9932b-ed80-41a1-aea7-60b0466ebe7b/volumes" Mar 12 16:39:33 crc kubenswrapper[4687]: I0312 16:39:33.055254 4687 scope.go:117] "RemoveContainer" containerID="8d62bdd938592fd042b95a13fec5a580d6c38e9bd755ec9cf81f56b00e12b846" Mar 12 16:39:44 crc kubenswrapper[4687]: I0312 16:39:44.122089 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:39:44 crc kubenswrapper[4687]: I0312 16:39:44.122660 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.151296 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29555560-dlhfz"] Mar 12 16:40:00 crc kubenswrapper[4687]: E0312 16:40:00.152958 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="extract-utilities" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.152988 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="extract-utilities" Mar 12 16:40:00 crc kubenswrapper[4687]: E0312 16:40:00.153015 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="registry-server" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.153032 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="registry-server" Mar 12 16:40:00 crc kubenswrapper[4687]: E0312 16:40:00.153052 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="extract-content" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.153066 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="extract-content" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.153634 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b34c42c-ede5-4a8d-b3a9-20068603333e" containerName="registry-server" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.155994 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555560-dlhfz" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.160376 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.160417 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.160678 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.168109 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555560-dlhfz"] Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.255777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd58l\" (UniqueName: \"kubernetes.io/projected/785c3ff1-a21e-4abf-a8a1-18f5c270b0a8-kube-api-access-hd58l\") pod \"auto-csr-approver-29555560-dlhfz\" (UID: \"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8\") " pod="openshift-infra/auto-csr-approver-29555560-dlhfz" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.358603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd58l\" (UniqueName: \"kubernetes.io/projected/785c3ff1-a21e-4abf-a8a1-18f5c270b0a8-kube-api-access-hd58l\") pod \"auto-csr-approver-29555560-dlhfz\" (UID: \"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8\") " pod="openshift-infra/auto-csr-approver-29555560-dlhfz" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.380987 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd58l\" (UniqueName: \"kubernetes.io/projected/785c3ff1-a21e-4abf-a8a1-18f5c270b0a8-kube-api-access-hd58l\") pod \"auto-csr-approver-29555560-dlhfz\" (UID: 
\"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8\") " pod="openshift-infra/auto-csr-approver-29555560-dlhfz" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.477550 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555560-dlhfz" Mar 12 16:40:00 crc kubenswrapper[4687]: I0312 16:40:00.936797 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555560-dlhfz"] Mar 12 16:40:01 crc kubenswrapper[4687]: I0312 16:40:01.615334 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555560-dlhfz" event={"ID":"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8","Type":"ContainerStarted","Data":"603a0a718d7ac28b40bb13529f0e6811e681b20ffd9beb013faab2e470f41f61"} Mar 12 16:40:02 crc kubenswrapper[4687]: I0312 16:40:02.642755 4687 generic.go:334] "Generic (PLEG): container finished" podID="785c3ff1-a21e-4abf-a8a1-18f5c270b0a8" containerID="198009147b2b53efd5f4ccfe80435ac037301bbf7954d0592ace504e1d79a7fc" exitCode=0 Mar 12 16:40:02 crc kubenswrapper[4687]: I0312 16:40:02.642853 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555560-dlhfz" event={"ID":"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8","Type":"ContainerDied","Data":"198009147b2b53efd5f4ccfe80435ac037301bbf7954d0592ace504e1d79a7fc"} Mar 12 16:40:04 crc kubenswrapper[4687]: I0312 16:40:04.073510 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555560-dlhfz" Mar 12 16:40:04 crc kubenswrapper[4687]: I0312 16:40:04.165772 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd58l\" (UniqueName: \"kubernetes.io/projected/785c3ff1-a21e-4abf-a8a1-18f5c270b0a8-kube-api-access-hd58l\") pod \"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8\" (UID: \"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8\") " Mar 12 16:40:04 crc kubenswrapper[4687]: I0312 16:40:04.171888 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785c3ff1-a21e-4abf-a8a1-18f5c270b0a8-kube-api-access-hd58l" (OuterVolumeSpecName: "kube-api-access-hd58l") pod "785c3ff1-a21e-4abf-a8a1-18f5c270b0a8" (UID: "785c3ff1-a21e-4abf-a8a1-18f5c270b0a8"). InnerVolumeSpecName "kube-api-access-hd58l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:40:04 crc kubenswrapper[4687]: I0312 16:40:04.269435 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd58l\" (UniqueName: \"kubernetes.io/projected/785c3ff1-a21e-4abf-a8a1-18f5c270b0a8-kube-api-access-hd58l\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:04 crc kubenswrapper[4687]: I0312 16:40:04.664351 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555560-dlhfz" event={"ID":"785c3ff1-a21e-4abf-a8a1-18f5c270b0a8","Type":"ContainerDied","Data":"603a0a718d7ac28b40bb13529f0e6811e681b20ffd9beb013faab2e470f41f61"} Mar 12 16:40:04 crc kubenswrapper[4687]: I0312 16:40:04.664426 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603a0a718d7ac28b40bb13529f0e6811e681b20ffd9beb013faab2e470f41f61" Mar 12 16:40:04 crc kubenswrapper[4687]: I0312 16:40:04.664435 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555560-dlhfz" Mar 12 16:40:05 crc kubenswrapper[4687]: I0312 16:40:05.140434 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555554-q5mmh"] Mar 12 16:40:05 crc kubenswrapper[4687]: I0312 16:40:05.152672 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555554-q5mmh"] Mar 12 16:40:05 crc kubenswrapper[4687]: I0312 16:40:05.749147 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ee8c3f-eebe-42fe-a34e-8e434b1b1444" path="/var/lib/kubelet/pods/78ee8c3f-eebe-42fe-a34e-8e434b1b1444/volumes" Mar 12 16:40:07 crc kubenswrapper[4687]: I0312 16:40:07.708471 4687 generic.go:334] "Generic (PLEG): container finished" podID="de0d3f98-6bf7-432a-ab9c-5be397f44fc2" containerID="099f3cba9ffe4fdeaf66662832049990eb90eda9b6cb65143eca5060ba95d87a" exitCode=0 Mar 12 16:40:07 crc kubenswrapper[4687]: I0312 16:40:07.708561 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" event={"ID":"de0d3f98-6bf7-432a-ab9c-5be397f44fc2","Type":"ContainerDied","Data":"099f3cba9ffe4fdeaf66662832049990eb90eda9b6cb65143eca5060ba95d87a"} Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.208088 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.284837 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-inventory\") pod \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.284891 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovn-combined-ca-bundle\") pod \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.284932 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhtfh\" (UniqueName: \"kubernetes.io/projected/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-kube-api-access-qhtfh\") pod \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.285020 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovncontroller-config-0\") pod \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.285175 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ssh-key-openstack-edpm-ipam\") pod \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\" (UID: \"de0d3f98-6bf7-432a-ab9c-5be397f44fc2\") " Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.290750 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-kube-api-access-qhtfh" (OuterVolumeSpecName: 
"kube-api-access-qhtfh") pod "de0d3f98-6bf7-432a-ab9c-5be397f44fc2" (UID: "de0d3f98-6bf7-432a-ab9c-5be397f44fc2"). InnerVolumeSpecName "kube-api-access-qhtfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.292157 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "de0d3f98-6bf7-432a-ab9c-5be397f44fc2" (UID: "de0d3f98-6bf7-432a-ab9c-5be397f44fc2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.314277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "de0d3f98-6bf7-432a-ab9c-5be397f44fc2" (UID: "de0d3f98-6bf7-432a-ab9c-5be397f44fc2"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.317207 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de0d3f98-6bf7-432a-ab9c-5be397f44fc2" (UID: "de0d3f98-6bf7-432a-ab9c-5be397f44fc2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.328239 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-inventory" (OuterVolumeSpecName: "inventory") pod "de0d3f98-6bf7-432a-ab9c-5be397f44fc2" (UID: "de0d3f98-6bf7-432a-ab9c-5be397f44fc2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.388469 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.388502 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.388515 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhtfh\" (UniqueName: \"kubernetes.io/projected/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-kube-api-access-qhtfh\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.388524 4687 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.388536 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de0d3f98-6bf7-432a-ab9c-5be397f44fc2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.729683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" event={"ID":"de0d3f98-6bf7-432a-ab9c-5be397f44fc2","Type":"ContainerDied","Data":"32f0bcda0433873632461556ec3185ff50b97094e93f1f64abb80329f42e3559"} Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.730003 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f0bcda0433873632461556ec3185ff50b97094e93f1f64abb80329f42e3559" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.729740 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlb4j" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.828294 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5"] Mar 12 16:40:09 crc kubenswrapper[4687]: E0312 16:40:09.828771 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0d3f98-6bf7-432a-ab9c-5be397f44fc2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.828786 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0d3f98-6bf7-432a-ab9c-5be397f44fc2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 16:40:09 crc kubenswrapper[4687]: E0312 16:40:09.828803 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785c3ff1-a21e-4abf-a8a1-18f5c270b0a8" containerName="oc" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.828809 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="785c3ff1-a21e-4abf-a8a1-18f5c270b0a8" containerName="oc" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.829033 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="785c3ff1-a21e-4abf-a8a1-18f5c270b0a8" containerName="oc" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.829056 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0d3f98-6bf7-432a-ab9c-5be397f44fc2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.829828 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.834233 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.834574 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.834618 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.834846 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.835043 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.835116 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.850920 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5"] Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.900790 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.900945 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hds79\" (UniqueName: \"kubernetes.io/projected/ee50fab4-f911-4a8c-991d-c8ec9a408352-kube-api-access-hds79\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.901017 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.901097 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.901186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:09 crc kubenswrapper[4687]: I0312 16:40:09.901220 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.003178 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.003275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.003313 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.003332 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.003413 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.003536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hds79\" (UniqueName: \"kubernetes.io/projected/ee50fab4-f911-4a8c-991d-c8ec9a408352-kube-api-access-hds79\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.009020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.009036 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.009179 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.009507 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.015524 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.024188 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hds79\" (UniqueName: \"kubernetes.io/projected/ee50fab4-f911-4a8c-991d-c8ec9a408352-kube-api-access-hds79\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.151121 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.699091 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5"] Mar 12 16:40:10 crc kubenswrapper[4687]: I0312 16:40:10.742946 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" event={"ID":"ee50fab4-f911-4a8c-991d-c8ec9a408352","Type":"ContainerStarted","Data":"58ae2b41fa3ae973233c3871facc0c175756b4bd0e16cce97165cfdd6cce27ac"} Mar 12 16:40:11 crc kubenswrapper[4687]: I0312 16:40:11.028510 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-49t6h"] Mar 12 16:40:11 crc kubenswrapper[4687]: I0312 16:40:11.037356 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-49t6h"] Mar 12 16:40:11 crc kubenswrapper[4687]: I0312 16:40:11.748109 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e192db3c-31bd-4855-bebd-d30764c224cf" path="/var/lib/kubelet/pods/e192db3c-31bd-4855-bebd-d30764c224cf/volumes" Mar 12 16:40:11 crc kubenswrapper[4687]: I0312 16:40:11.758941 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" event={"ID":"ee50fab4-f911-4a8c-991d-c8ec9a408352","Type":"ContainerStarted","Data":"e2872a25d6e4f671febf548ff294ccceeb37adb399f6e878ea35b1fb47f2c020"} Mar 12 16:40:11 crc kubenswrapper[4687]: I0312 16:40:11.786872 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" podStartSLOduration=2.164399199 podStartE2EDuration="2.786849927s" podCreationTimestamp="2026-03-12 16:40:09 +0000 UTC" firstStartedPulling="2026-03-12 16:40:10.704822212 +0000 UTC m=+2259.668784576" lastFinishedPulling="2026-03-12 16:40:11.32727297 +0000 UTC m=+2260.291235304" observedRunningTime="2026-03-12 16:40:11.78002826 +0000 UTC m=+2260.743990624" watchObservedRunningTime="2026-03-12 16:40:11.786849927 +0000 UTC m=+2260.750812281" Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.121334 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.121873 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.121914 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.122606 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.122658 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" gracePeriod=600 Mar 12 16:40:14 crc kubenswrapper[4687]: E0312 16:40:14.242879 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.807197 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" exitCode=0 Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.807266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a"} Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.807642 4687 scope.go:117] "RemoveContainer" containerID="d9dc6392cb7dbf829f54eade1ebdbf1980615c87a7f18a1a2fbd8d0c0a711640" Mar 12 16:40:14 crc kubenswrapper[4687]: I0312 16:40:14.808495 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:40:14 crc kubenswrapper[4687]: E0312 16:40:14.808823 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:40:26 crc kubenswrapper[4687]: I0312 
16:40:26.734330 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:40:26 crc kubenswrapper[4687]: E0312 16:40:26.735445 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:40:33 crc kubenswrapper[4687]: I0312 16:40:33.143209 4687 scope.go:117] "RemoveContainer" containerID="74a0ace896b60d0c38999af8846eaf277b205593eea3f218ddc561c4703c5236" Mar 12 16:40:33 crc kubenswrapper[4687]: I0312 16:40:33.222384 4687 scope.go:117] "RemoveContainer" containerID="11254152a94aef76f5d06793656a243b91da8e91fbf720c7b7c5dc14c7a34e4e" Mar 12 16:40:37 crc kubenswrapper[4687]: I0312 16:40:37.734056 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:40:37 crc kubenswrapper[4687]: E0312 16:40:37.735739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:40:48 crc kubenswrapper[4687]: I0312 16:40:48.733071 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:40:48 crc kubenswrapper[4687]: E0312 16:40:48.733644 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:40:57 crc kubenswrapper[4687]: I0312 16:40:57.291928 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee50fab4-f911-4a8c-991d-c8ec9a408352" containerID="e2872a25d6e4f671febf548ff294ccceeb37adb399f6e878ea35b1fb47f2c020" exitCode=0 Mar 12 16:40:57 crc kubenswrapper[4687]: I0312 16:40:57.292459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" event={"ID":"ee50fab4-f911-4a8c-991d-c8ec9a408352","Type":"ContainerDied","Data":"e2872a25d6e4f671febf548ff294ccceeb37adb399f6e878ea35b1fb47f2c020"} Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.773617 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.936455 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hds79\" (UniqueName: \"kubernetes.io/projected/ee50fab4-f911-4a8c-991d-c8ec9a408352-kube-api-access-hds79\") pod \"ee50fab4-f911-4a8c-991d-c8ec9a408352\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.936501 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-inventory\") pod \"ee50fab4-f911-4a8c-991d-c8ec9a408352\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.936567 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-metadata-combined-ca-bundle\") pod \"ee50fab4-f911-4a8c-991d-c8ec9a408352\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.936603 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-ssh-key-openstack-edpm-ipam\") pod \"ee50fab4-f911-4a8c-991d-c8ec9a408352\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.936640 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-nova-metadata-neutron-config-0\") pod \"ee50fab4-f911-4a8c-991d-c8ec9a408352\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.936756 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ee50fab4-f911-4a8c-991d-c8ec9a408352\" (UID: \"ee50fab4-f911-4a8c-991d-c8ec9a408352\") " Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.943783 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ee50fab4-f911-4a8c-991d-c8ec9a408352" (UID: "ee50fab4-f911-4a8c-991d-c8ec9a408352"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:58 crc kubenswrapper[4687]: I0312 16:40:58.965625 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee50fab4-f911-4a8c-991d-c8ec9a408352-kube-api-access-hds79" (OuterVolumeSpecName: "kube-api-access-hds79") pod "ee50fab4-f911-4a8c-991d-c8ec9a408352" (UID: "ee50fab4-f911-4a8c-991d-c8ec9a408352"). InnerVolumeSpecName "kube-api-access-hds79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.026484 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee50fab4-f911-4a8c-991d-c8ec9a408352" (UID: "ee50fab4-f911-4a8c-991d-c8ec9a408352"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.026607 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ee50fab4-f911-4a8c-991d-c8ec9a408352" (UID: "ee50fab4-f911-4a8c-991d-c8ec9a408352"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.029553 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-inventory" (OuterVolumeSpecName: "inventory") pod "ee50fab4-f911-4a8c-991d-c8ec9a408352" (UID: "ee50fab4-f911-4a8c-991d-c8ec9a408352"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.035512 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ee50fab4-f911-4a8c-991d-c8ec9a408352" (UID: "ee50fab4-f911-4a8c-991d-c8ec9a408352"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.039655 4687 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.039693 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hds79\" (UniqueName: \"kubernetes.io/projected/ee50fab4-f911-4a8c-991d-c8ec9a408352-kube-api-access-hds79\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.039703 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.039714 4687 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.039725 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.039733 4687 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ee50fab4-f911-4a8c-991d-c8ec9a408352-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.312557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" event={"ID":"ee50fab4-f911-4a8c-991d-c8ec9a408352","Type":"ContainerDied","Data":"58ae2b41fa3ae973233c3871facc0c175756b4bd0e16cce97165cfdd6cce27ac"} Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.312602 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ae2b41fa3ae973233c3871facc0c175756b4bd0e16cce97165cfdd6cce27ac" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.313113 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.428816 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr"] Mar 12 16:40:59 crc kubenswrapper[4687]: E0312 16:40:59.429281 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee50fab4-f911-4a8c-991d-c8ec9a408352" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.429304 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee50fab4-f911-4a8c-991d-c8ec9a408352" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.429646 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee50fab4-f911-4a8c-991d-c8ec9a408352" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.430608 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.432613 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.433325 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.433766 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.434069 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.434686 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.445906 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr"] Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.550877 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.550934 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.551033 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml55z\" (UniqueName: \"kubernetes.io/projected/092d1dd3-ae4d-4f56-81f7-105d449842f5-kube-api-access-ml55z\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.551079 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.551121 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.653372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml55z\" (UniqueName: \"kubernetes.io/projected/092d1dd3-ae4d-4f56-81f7-105d449842f5-kube-api-access-ml55z\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.653446 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.653495 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.653657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.653686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.657998 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.658616 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.664189 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.664705 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.670458 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml55z\" (UniqueName: \"kubernetes.io/projected/092d1dd3-ae4d-4f56-81f7-105d449842f5-kube-api-access-ml55z\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m85zr\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:40:59 crc kubenswrapper[4687]: I0312 16:40:59.748997 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:41:00 crc kubenswrapper[4687]: I0312 16:41:00.350906 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:41:00 crc kubenswrapper[4687]: I0312 16:41:00.360800 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr"] Mar 12 16:41:01 crc kubenswrapper[4687]: I0312 16:41:01.336904 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" event={"ID":"092d1dd3-ae4d-4f56-81f7-105d449842f5","Type":"ContainerStarted","Data":"5e6b38a52a787f1702f5df750096a7b069b62aad6f0ef89e507f3601ec6e8046"} Mar 12 16:41:01 crc kubenswrapper[4687]: I0312 16:41:01.337213 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" event={"ID":"092d1dd3-ae4d-4f56-81f7-105d449842f5","Type":"ContainerStarted","Data":"4bbf6950cf12133b4184c97856934dd9d946b8ef065aa06ed72c075ce1086255"} Mar 12 16:41:03 crc kubenswrapper[4687]: I0312 16:41:03.742140 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:41:03 crc kubenswrapper[4687]: E0312 16:41:03.743021 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:41:16 crc kubenswrapper[4687]: I0312 16:41:16.733694 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:41:16 crc kubenswrapper[4687]: E0312 16:41:16.734629 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:41:27 crc kubenswrapper[4687]: I0312 16:41:27.733615 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:41:27 crc kubenswrapper[4687]: E0312 16:41:27.734304 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:41:40 crc kubenswrapper[4687]: I0312 16:41:40.734022 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:41:40 crc kubenswrapper[4687]: E0312 16:41:40.735023 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:41:54 crc kubenswrapper[4687]: I0312 16:41:54.733946 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:41:54 crc kubenswrapper[4687]: E0312 16:41:54.734697 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.153819 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" podStartSLOduration=60.674777918 podStartE2EDuration="1m1.153799927s" podCreationTimestamp="2026-03-12 16:40:59 +0000 UTC" firstStartedPulling="2026-03-12 16:41:00.350647883 +0000 UTC m=+2309.314610217" lastFinishedPulling="2026-03-12 16:41:00.829669882 +0000 UTC m=+2309.793632226" observedRunningTime="2026-03-12 16:41:01.357883291 +0000 UTC m=+2310.321845635" watchObservedRunningTime="2026-03-12 16:42:00.153799927 +0000 UTC m=+2369.117762281" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.157894 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555562-7d6lm"] Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.159889 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555562-7d6lm" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.162499 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.162647 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.162690 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.177665 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555562-7d6lm"] Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.243997 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbj7v\" (UniqueName: \"kubernetes.io/projected/ca450e45-1f11-4741-8038-97071a2a313d-kube-api-access-jbj7v\") pod \"auto-csr-approver-29555562-7d6lm\" (UID: \"ca450e45-1f11-4741-8038-97071a2a313d\") " pod="openshift-infra/auto-csr-approver-29555562-7d6lm" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.346576 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbj7v\" (UniqueName: \"kubernetes.io/projected/ca450e45-1f11-4741-8038-97071a2a313d-kube-api-access-jbj7v\") pod \"auto-csr-approver-29555562-7d6lm\" (UID: \"ca450e45-1f11-4741-8038-97071a2a313d\") " pod="openshift-infra/auto-csr-approver-29555562-7d6lm" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.372011 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbj7v\" (UniqueName: \"kubernetes.io/projected/ca450e45-1f11-4741-8038-97071a2a313d-kube-api-access-jbj7v\") pod \"auto-csr-approver-29555562-7d6lm\" (UID: \"ca450e45-1f11-4741-8038-97071a2a313d\") " pod="openshift-infra/auto-csr-approver-29555562-7d6lm" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.489895 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555562-7d6lm" Mar 12 16:42:00 crc kubenswrapper[4687]: I0312 16:42:00.974635 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555562-7d6lm"] Mar 12 16:42:01 crc kubenswrapper[4687]: I0312 16:42:01.030777 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555562-7d6lm" event={"ID":"ca450e45-1f11-4741-8038-97071a2a313d","Type":"ContainerStarted","Data":"343ce6455e67dd9f211298f8c7175e0fa5adf737f26d3222545c4573696fb804"} Mar 12 16:42:03 crc kubenswrapper[4687]: I0312 16:42:03.054110 4687 generic.go:334] "Generic (PLEG): container finished" podID="ca450e45-1f11-4741-8038-97071a2a313d" containerID="40e11b2146e12c519d8491b61bb4f86e4e112fbde6842df71864f1dfa124a862" exitCode=0 Mar 12 16:42:03 crc kubenswrapper[4687]: I0312 16:42:03.054579 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555562-7d6lm" event={"ID":"ca450e45-1f11-4741-8038-97071a2a313d","Type":"ContainerDied","Data":"40e11b2146e12c519d8491b61bb4f86e4e112fbde6842df71864f1dfa124a862"} Mar 12 16:42:04 crc kubenswrapper[4687]: I0312 16:42:04.461232 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555562-7d6lm" Mar 12 16:42:04 crc kubenswrapper[4687]: I0312 16:42:04.586635 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbj7v\" (UniqueName: \"kubernetes.io/projected/ca450e45-1f11-4741-8038-97071a2a313d-kube-api-access-jbj7v\") pod \"ca450e45-1f11-4741-8038-97071a2a313d\" (UID: \"ca450e45-1f11-4741-8038-97071a2a313d\") " Mar 12 16:42:04 crc kubenswrapper[4687]: I0312 16:42:04.592491 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca450e45-1f11-4741-8038-97071a2a313d-kube-api-access-jbj7v" (OuterVolumeSpecName: "kube-api-access-jbj7v") pod "ca450e45-1f11-4741-8038-97071a2a313d" (UID: "ca450e45-1f11-4741-8038-97071a2a313d"). InnerVolumeSpecName "kube-api-access-jbj7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:42:04 crc kubenswrapper[4687]: I0312 16:42:04.690420 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbj7v\" (UniqueName: \"kubernetes.io/projected/ca450e45-1f11-4741-8038-97071a2a313d-kube-api-access-jbj7v\") on node \"crc\" DevicePath \"\"" Mar 12 16:42:05 crc kubenswrapper[4687]: I0312 16:42:05.076162 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555562-7d6lm" event={"ID":"ca450e45-1f11-4741-8038-97071a2a313d","Type":"ContainerDied","Data":"343ce6455e67dd9f211298f8c7175e0fa5adf737f26d3222545c4573696fb804"} Mar 12 16:42:05 crc kubenswrapper[4687]: I0312 16:42:05.076203 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="343ce6455e67dd9f211298f8c7175e0fa5adf737f26d3222545c4573696fb804" Mar 12 16:42:05 crc kubenswrapper[4687]: I0312 16:42:05.076252 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555562-7d6lm" Mar 12 16:42:05 crc kubenswrapper[4687]: I0312 16:42:05.534245 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555556-vlwk9"] Mar 12 16:42:05 crc kubenswrapper[4687]: I0312 16:42:05.546434 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555556-vlwk9"] Mar 12 16:42:05 crc kubenswrapper[4687]: I0312 16:42:05.733588 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:42:05 crc kubenswrapper[4687]: E0312 16:42:05.733958 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:42:05 crc kubenswrapper[4687]: I0312 16:42:05.747844 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ff2b76-6e20-4908-81f5-7fa1c10a1b62" path="/var/lib/kubelet/pods/b8ff2b76-6e20-4908-81f5-7fa1c10a1b62/volumes" Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.940076 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lfgmj"] Mar 12 16:42:11 crc kubenswrapper[4687]: E0312 16:42:11.941347 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca450e45-1f11-4741-8038-97071a2a313d" containerName="oc" Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.941384 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca450e45-1f11-4741-8038-97071a2a313d" containerName="oc" Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.941670 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca450e45-1f11-4741-8038-97071a2a313d" containerName="oc" Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.943812 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.955268 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfgmj"] Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.979454 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-utilities\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.979636 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-catalog-content\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:11 crc kubenswrapper[4687]: I0312 16:42:11.979674 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf89m\" (UniqueName: \"kubernetes.io/projected/8f1d4e03-1659-418d-8f5f-6296c912c7bd-kube-api-access-lf89m\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.081748 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-catalog-content\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.081796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf89m\" (UniqueName: \"kubernetes.io/projected/8f1d4e03-1659-418d-8f5f-6296c912c7bd-kube-api-access-lf89m\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.081893 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-utilities\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.082317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-catalog-content\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.082453 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-utilities\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.104177 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lf89m\" (UniqueName: \"kubernetes.io/projected/8f1d4e03-1659-418d-8f5f-6296c912c7bd-kube-api-access-lf89m\") pod \"certified-operators-lfgmj\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.276038 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:12 crc kubenswrapper[4687]: I0312 16:42:12.871572 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfgmj"] Mar 12 16:42:13 crc kubenswrapper[4687]: I0312 16:42:13.158353 4687 generic.go:334] "Generic (PLEG): container finished" podID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerID="946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd" exitCode=0 Mar 12 16:42:13 crc kubenswrapper[4687]: I0312 16:42:13.158469 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfgmj" event={"ID":"8f1d4e03-1659-418d-8f5f-6296c912c7bd","Type":"ContainerDied","Data":"946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd"} Mar 12 16:42:13 crc kubenswrapper[4687]: I0312 16:42:13.158720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfgmj" event={"ID":"8f1d4e03-1659-418d-8f5f-6296c912c7bd","Type":"ContainerStarted","Data":"9239a5ef2880d3c8035a79f4b44a4499b59c25faa3e028467bc4a026da864e7b"} Mar 12 16:42:14 crc kubenswrapper[4687]: I0312 16:42:14.177621 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfgmj" event={"ID":"8f1d4e03-1659-418d-8f5f-6296c912c7bd","Type":"ContainerStarted","Data":"686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88"} Mar 12 16:42:16 crc kubenswrapper[4687]: I0312 16:42:16.207819 4687 generic.go:334] "Generic (PLEG): container finished" podID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerID="686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88" exitCode=0 Mar 12 16:42:16 crc kubenswrapper[4687]: I0312 16:42:16.208046 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfgmj" event={"ID":"8f1d4e03-1659-418d-8f5f-6296c912c7bd","Type":"ContainerDied","Data":"686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88"} Mar 12 16:42:17 crc kubenswrapper[4687]: I0312 16:42:17.224324 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfgmj" event={"ID":"8f1d4e03-1659-418d-8f5f-6296c912c7bd","Type":"ContainerStarted","Data":"6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d"} Mar 12 16:42:17 crc kubenswrapper[4687]: I0312 16:42:17.255760 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lfgmj" podStartSLOduration=2.75842805 podStartE2EDuration="6.255741748s" podCreationTimestamp="2026-03-12 16:42:11 +0000 UTC" firstStartedPulling="2026-03-12 16:42:13.160097391 +0000 UTC m=+2382.124059735" lastFinishedPulling="2026-03-12 16:42:16.657411079 +0000 UTC m=+2385.621373433" observedRunningTime="2026-03-12 16:42:17.24672109 +0000 UTC m=+2386.210683464" watchObservedRunningTime="2026-03-12 16:42:17.255741748 +0000 UTC m=+2386.219704092" Mar 12 16:42:17 crc kubenswrapper[4687]: I0312 16:42:17.733613 4687 scope.go:117] "RemoveContainer" 
containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:42:17 crc kubenswrapper[4687]: E0312 16:42:17.734350 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:42:22 crc kubenswrapper[4687]: I0312 16:42:22.277091 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:22 crc kubenswrapper[4687]: I0312 16:42:22.277636 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:22 crc kubenswrapper[4687]: I0312 16:42:22.378050 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:22 crc kubenswrapper[4687]: I0312 16:42:22.484640 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:22 crc kubenswrapper[4687]: I0312 16:42:22.646633 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lfgmj"] Mar 12 16:42:24 crc kubenswrapper[4687]: I0312 16:42:24.345185 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lfgmj" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="registry-server" containerID="cri-o://6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d" gracePeriod=2 Mar 12 16:42:24 crc kubenswrapper[4687]: I0312 16:42:24.939095 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.127171 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-utilities\") pod \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.127253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf89m\" (UniqueName: \"kubernetes.io/projected/8f1d4e03-1659-418d-8f5f-6296c912c7bd-kube-api-access-lf89m\") pod \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.127348 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-catalog-content\") pod \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\" (UID: \"8f1d4e03-1659-418d-8f5f-6296c912c7bd\") " Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.129125 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-utilities" (OuterVolumeSpecName: "utilities") pod "8f1d4e03-1659-418d-8f5f-6296c912c7bd" (UID: "8f1d4e03-1659-418d-8f5f-6296c912c7bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.137093 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1d4e03-1659-418d-8f5f-6296c912c7bd-kube-api-access-lf89m" (OuterVolumeSpecName: "kube-api-access-lf89m") pod "8f1d4e03-1659-418d-8f5f-6296c912c7bd" (UID: "8f1d4e03-1659-418d-8f5f-6296c912c7bd"). InnerVolumeSpecName "kube-api-access-lf89m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.222486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f1d4e03-1659-418d-8f5f-6296c912c7bd" (UID: "8f1d4e03-1659-418d-8f5f-6296c912c7bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.231293 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.231352 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf89m\" (UniqueName: \"kubernetes.io/projected/8f1d4e03-1659-418d-8f5f-6296c912c7bd-kube-api-access-lf89m\") on node \"crc\" DevicePath \"\"" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.231401 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1d4e03-1659-418d-8f5f-6296c912c7bd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.364752 4687 generic.go:334] "Generic (PLEG): container finished" podID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerID="6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d" exitCode=0 Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.364839 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lfgmj" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.364865 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfgmj" event={"ID":"8f1d4e03-1659-418d-8f5f-6296c912c7bd","Type":"ContainerDied","Data":"6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d"} Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.365716 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfgmj" event={"ID":"8f1d4e03-1659-418d-8f5f-6296c912c7bd","Type":"ContainerDied","Data":"9239a5ef2880d3c8035a79f4b44a4499b59c25faa3e028467bc4a026da864e7b"} Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.365746 4687 scope.go:117] "RemoveContainer" containerID="6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.413500 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lfgmj"] Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.415950 4687 scope.go:117] "RemoveContainer" containerID="686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.426860 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lfgmj"] Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.477090 4687 scope.go:117] "RemoveContainer" containerID="946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.503041 4687 scope.go:117] "RemoveContainer" containerID="6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d" Mar 12 16:42:25 crc kubenswrapper[4687]: E0312 16:42:25.503588 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d\": container with ID starting with 6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d not found: ID does not exist" containerID="6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.503675 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d"} err="failed to get container status \"6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d\": rpc error: code = NotFound desc = could not find container \"6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d\": container with ID starting with 6e7f5a33a9aeaf3e1ba03715bfd4de9944c3f42898ca1398e1bc2846c9f7450d not found: ID does not exist" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.503752 4687 scope.go:117] "RemoveContainer" containerID="686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88" Mar 12 16:42:25 crc kubenswrapper[4687]: E0312 16:42:25.504142 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88\": container with ID starting with 686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88 not found: ID does not exist" containerID="686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.504222 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88"} err="failed to get container status \"686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88\": rpc error: code = NotFound desc = could not find container \"686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88\": container with ID starting with 686a476d12604cfc39a4b3747771882c1dd67fdd14d5cf5340669c78e257ea88 not found: ID does not exist" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.504284 4687 scope.go:117] "RemoveContainer" containerID="946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd" Mar 12 16:42:25 crc kubenswrapper[4687]: E0312 16:42:25.504667 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd\": container with ID starting with 946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd not found: ID does not exist" containerID="946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.504712 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd"} err="failed to get container status \"946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd\": rpc error: code = NotFound desc = could not find container \"946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd\": container with ID starting with 946568aacab8cdba9f3ed3e4b41e6331f4196ebff0560468118f15a2604bd8dd not found: ID does not exist" Mar 12 16:42:25 crc kubenswrapper[4687]: I0312 16:42:25.752545 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" path="/var/lib/kubelet/pods/8f1d4e03-1659-418d-8f5f-6296c912c7bd/volumes" Mar 12 16:42:29 crc kubenswrapper[4687]: I0312 16:42:29.743627 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:42:29 crc kubenswrapper[4687]: E0312 16:42:29.744958 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:42:33 crc kubenswrapper[4687]: I0312 16:42:33.373621 4687 scope.go:117] "RemoveContainer" containerID="4ae82c930f5c0da069f67d382772ec4743ccccccfcd6578d7778634022986ac4" Mar 12 16:42:44 crc kubenswrapper[4687]: I0312 16:42:44.732598 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:42:44 crc kubenswrapper[4687]: E0312 16:42:44.733406 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:42:51 crc 
kubenswrapper[4687]: I0312 16:42:51.045573 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-79nqq"] Mar 12 16:42:51 crc kubenswrapper[4687]: E0312 16:42:51.046698 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="registry-server" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.046714 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="registry-server" Mar 12 16:42:51 crc kubenswrapper[4687]: E0312 16:42:51.046754 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="extract-utilities" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.046763 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="extract-utilities" Mar 12 16:42:51 crc kubenswrapper[4687]: E0312 16:42:51.046787 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="extract-content" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.046796 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="extract-content" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.047122 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1d4e03-1659-418d-8f5f-6296c912c7bd" containerName="registry-server" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.049325 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.063158 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79nqq"] Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.152877 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5rj\" (UniqueName: \"kubernetes.io/projected/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-kube-api-access-9d5rj\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.152974 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-utilities\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.153000 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-catalog-content\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.255553 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5rj\" (UniqueName: \"kubernetes.io/projected/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-kube-api-access-9d5rj\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " 
pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.255934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-utilities\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.255961 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-catalog-content\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.256331 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-utilities\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.256443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-catalog-content\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.278413 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5rj\" (UniqueName: \"kubernetes.io/projected/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-kube-api-access-9d5rj\") pod \"redhat-marketplace-79nqq\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.372741 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:42:51 crc kubenswrapper[4687]: W0312 16:42:51.882990 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b3d4e5_4a08_4828_b9e7_2a437dc09cb6.slice/crio-5fd453a08e7f1150a75ecae3a65b97b3a085e3cb10c2a0297dfac691e67c07c2 WatchSource:0}: Error finding container 5fd453a08e7f1150a75ecae3a65b97b3a085e3cb10c2a0297dfac691e67c07c2: Status 404 returned error can't find the container with id 5fd453a08e7f1150a75ecae3a65b97b3a085e3cb10c2a0297dfac691e67c07c2 Mar 12 16:42:51 crc kubenswrapper[4687]: I0312 16:42:51.885068 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79nqq"] Mar 12 16:42:52 crc kubenswrapper[4687]: I0312 16:42:52.678678 4687 generic.go:334] "Generic (PLEG): container finished" podID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerID="4a3b2416cdf144bea347dca9688c71c6671ad10a1937fa2e35e396b19d8f944f" exitCode=0 Mar 12 16:42:52 crc kubenswrapper[4687]: I0312 16:42:52.678728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79nqq" event={"ID":"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6","Type":"ContainerDied","Data":"4a3b2416cdf144bea347dca9688c71c6671ad10a1937fa2e35e396b19d8f944f"} Mar 12 16:42:52 crc kubenswrapper[4687]: I0312 16:42:52.678990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79nqq" event={"ID":"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6","Type":"ContainerStarted","Data":"5fd453a08e7f1150a75ecae3a65b97b3a085e3cb10c2a0297dfac691e67c07c2"} Mar 12 16:42:53 crc kubenswrapper[4687]: I0312 16:42:53.694096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79nqq" event={"ID":"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6","Type":"ContainerStarted","Data":"19fd986551a22726d75cda3e83f21ecf11aafd2d9d5117efc0d17b66343ddae7"} Mar 12 16:42:54 crc kubenswrapper[4687]: I0312 16:42:54.709474 4687 generic.go:334] "Generic (PLEG): container finished" podID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerID="19fd986551a22726d75cda3e83f21ecf11aafd2d9d5117efc0d17b66343ddae7" exitCode=0 Mar 12 16:42:54 crc kubenswrapper[4687]: I0312 16:42:54.709529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79nqq" event={"ID":"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6","Type":"ContainerDied","Data":"19fd986551a22726d75cda3e83f21ecf11aafd2d9d5117efc0d17b66343ddae7"} Mar 12 16:42:55 crc kubenswrapper[4687]: I0312 16:42:55.720999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79nqq" event={"ID":"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6","Type":"ContainerStarted","Data":"bb5b631421b369f85e9080f08fbd387eeea33b54785e7e5427600c1b52cfe6b1"} Mar 12 16:42:55 crc kubenswrapper[4687]: I0312 16:42:55.737745 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:42:55 crc kubenswrapper[4687]: E0312 16:42:55.738089 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" 
podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:42:55 crc kubenswrapper[4687]: I0312 16:42:55.762401 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-79nqq" podStartSLOduration=2.2057888 podStartE2EDuration="4.762346213s" podCreationTimestamp="2026-03-12 16:42:51 +0000 UTC" firstStartedPulling="2026-03-12 16:42:52.681271734 +0000 UTC m=+2421.645234118" lastFinishedPulling="2026-03-12 16:42:55.237829177 +0000 UTC m=+2424.201791531" observedRunningTime="2026-03-12 16:42:55.742913661 +0000 UTC m=+2424.706876035" watchObservedRunningTime="2026-03-12 16:42:55.762346213 +0000 UTC m=+2424.726308577" Mar 12 16:43:01 crc kubenswrapper[4687]: I0312 16:43:01.374222 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:43:01 crc kubenswrapper[4687]: I0312 16:43:01.374932 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:43:01 crc kubenswrapper[4687]: I0312 16:43:01.425103 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:43:01 crc kubenswrapper[4687]: I0312 16:43:01.884265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:43:01 crc kubenswrapper[4687]: I0312 16:43:01.972074 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79nqq"] Mar 12 16:43:03 crc kubenswrapper[4687]: I0312 16:43:03.818164 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-79nqq" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="registry-server" containerID="cri-o://bb5b631421b369f85e9080f08fbd387eeea33b54785e7e5427600c1b52cfe6b1" gracePeriod=2 Mar 12 16:43:04 crc kubenswrapper[4687]: I0312 16:43:04.836972 4687 generic.go:334] "Generic (PLEG): container finished" podID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerID="bb5b631421b369f85e9080f08fbd387eeea33b54785e7e5427600c1b52cfe6b1" exitCode=0 Mar 12 16:43:04 crc kubenswrapper[4687]: I0312 16:43:04.837127 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79nqq" event={"ID":"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6","Type":"ContainerDied","Data":"bb5b631421b369f85e9080f08fbd387eeea33b54785e7e5427600c1b52cfe6b1"} Mar 12 16:43:04 crc kubenswrapper[4687]: I0312 16:43:04.837238 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79nqq" event={"ID":"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6","Type":"ContainerDied","Data":"5fd453a08e7f1150a75ecae3a65b97b3a085e3cb10c2a0297dfac691e67c07c2"} Mar 12 16:43:04 crc kubenswrapper[4687]: I0312 16:43:04.837254 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fd453a08e7f1150a75ecae3a65b97b3a085e3cb10c2a0297dfac691e67c07c2" Mar 12 16:43:04 crc kubenswrapper[4687]: I0312 16:43:04.868722 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.037638 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-utilities\") pod \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.038192 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d5rj\" (UniqueName: \"kubernetes.io/projected/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-kube-api-access-9d5rj\") pod \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.038229 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-catalog-content\") pod \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\" (UID: \"24b3d4e5-4a08-4828-b9e7-2a437dc09cb6\") " Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.038639 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-utilities" (OuterVolumeSpecName: "utilities") pod "24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" (UID: "24b3d4e5-4a08-4828-b9e7-2a437dc09cb6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.039056 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.043259 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-kube-api-access-9d5rj" (OuterVolumeSpecName: "kube-api-access-9d5rj") pod "24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" (UID: "24b3d4e5-4a08-4828-b9e7-2a437dc09cb6"). InnerVolumeSpecName "kube-api-access-9d5rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.141294 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d5rj\" (UniqueName: \"kubernetes.io/projected/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-kube-api-access-9d5rj\") on node \"crc\" DevicePath \"\"" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.407519 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" (UID: "24b3d4e5-4a08-4828-b9e7-2a437dc09cb6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.448252 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.846396 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79nqq" Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.876473 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79nqq"] Mar 12 16:43:05 crc kubenswrapper[4687]: I0312 16:43:05.888844 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-79nqq"] Mar 12 16:43:07 crc kubenswrapper[4687]: I0312 16:43:07.745204 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" path="/var/lib/kubelet/pods/24b3d4e5-4a08-4828-b9e7-2a437dc09cb6/volumes" Mar 12 16:43:09 crc kubenswrapper[4687]: I0312 16:43:09.733729 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:43:09 crc kubenswrapper[4687]: E0312 16:43:09.734325 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:43:21 crc kubenswrapper[4687]: I0312 16:43:21.753098 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:43:21 crc kubenswrapper[4687]: E0312 16:43:21.754787 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:43:37 crc kubenswrapper[4687]: I0312 16:43:37.732923 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:43:37 crc kubenswrapper[4687]: E0312 16:43:37.733629 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:43:51 crc kubenswrapper[4687]: I0312 16:43:51.742232 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:43:51 crc kubenswrapper[4687]: E0312 16:43:51.745198 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.160833 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555564-sbwcx"] Mar 12 16:44:00 crc 
kubenswrapper[4687]: E0312 16:44:00.162005 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="extract-utilities" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.162026 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="extract-utilities" Mar 12 16:44:00 crc kubenswrapper[4687]: E0312 16:44:00.162054 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="registry-server" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.162063 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="registry-server" Mar 12 16:44:00 crc kubenswrapper[4687]: E0312 16:44:00.162078 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="extract-content" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.162089 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="extract-content" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.162403 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b3d4e5-4a08-4828-b9e7-2a437dc09cb6" containerName="registry-server" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.164469 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.171816 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.172042 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.172151 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.174741 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555564-sbwcx"] Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.263318 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wsv\" (UniqueName: \"kubernetes.io/projected/14dae95f-17a3-4f7f-96f4-e07c1713c61c-kube-api-access-c2wsv\") pod \"auto-csr-approver-29555564-sbwcx\" (UID: \"14dae95f-17a3-4f7f-96f4-e07c1713c61c\") " pod="openshift-infra/auto-csr-approver-29555564-sbwcx" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.366881 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2wsv\" (UniqueName: \"kubernetes.io/projected/14dae95f-17a3-4f7f-96f4-e07c1713c61c-kube-api-access-c2wsv\") pod \"auto-csr-approver-29555564-sbwcx\" (UID: \"14dae95f-17a3-4f7f-96f4-e07c1713c61c\") " pod="openshift-infra/auto-csr-approver-29555564-sbwcx" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.388164 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2wsv\" (UniqueName: \"kubernetes.io/projected/14dae95f-17a3-4f7f-96f4-e07c1713c61c-kube-api-access-c2wsv\") pod \"auto-csr-approver-29555564-sbwcx\" (UID: \"14dae95f-17a3-4f7f-96f4-e07c1713c61c\") " pod="openshift-infra/auto-csr-approver-29555564-sbwcx" Mar 12 16:44:00 
crc kubenswrapper[4687]: I0312 16:44:00.490904 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" Mar 12 16:44:00 crc kubenswrapper[4687]: I0312 16:44:00.977410 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555564-sbwcx"] Mar 12 16:44:01 crc kubenswrapper[4687]: I0312 16:44:01.522809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" event={"ID":"14dae95f-17a3-4f7f-96f4-e07c1713c61c","Type":"ContainerStarted","Data":"13280ac431dd9c8173337ba670d397ca16836c5e25a576161ec3cd6f769d9819"} Mar 12 16:44:02 crc kubenswrapper[4687]: I0312 16:44:02.535987 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" event={"ID":"14dae95f-17a3-4f7f-96f4-e07c1713c61c","Type":"ContainerStarted","Data":"c6bff18185ea0de94ca3908b722fe363155c60784879d6a0cc5d3def39c1a13c"} Mar 12 16:44:02 crc kubenswrapper[4687]: I0312 16:44:02.555879 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" podStartSLOduration=1.486646802 podStartE2EDuration="2.555856217s" podCreationTimestamp="2026-03-12 16:44:00 +0000 UTC" firstStartedPulling="2026-03-12 16:44:00.982926076 +0000 UTC m=+2489.946888420" lastFinishedPulling="2026-03-12 16:44:02.052135471 +0000 UTC m=+2491.016097835" observedRunningTime="2026-03-12 16:44:02.555658312 +0000 UTC m=+2491.519620716" watchObservedRunningTime="2026-03-12 16:44:02.555856217 +0000 UTC m=+2491.519818571" Mar 12 16:44:03 crc kubenswrapper[4687]: I0312 16:44:03.549376 4687 generic.go:334] "Generic (PLEG): container finished" podID="14dae95f-17a3-4f7f-96f4-e07c1713c61c" containerID="c6bff18185ea0de94ca3908b722fe363155c60784879d6a0cc5d3def39c1a13c" exitCode=0 Mar 12 16:44:03 crc kubenswrapper[4687]: I0312 16:44:03.549435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" event={"ID":"14dae95f-17a3-4f7f-96f4-e07c1713c61c","Type":"ContainerDied","Data":"c6bff18185ea0de94ca3908b722fe363155c60784879d6a0cc5d3def39c1a13c"} Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:04.999855 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:05.086619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2wsv\" (UniqueName: \"kubernetes.io/projected/14dae95f-17a3-4f7f-96f4-e07c1713c61c-kube-api-access-c2wsv\") pod \"14dae95f-17a3-4f7f-96f4-e07c1713c61c\" (UID: \"14dae95f-17a3-4f7f-96f4-e07c1713c61c\") " Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:05.091343 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14dae95f-17a3-4f7f-96f4-e07c1713c61c-kube-api-access-c2wsv" (OuterVolumeSpecName: "kube-api-access-c2wsv") pod "14dae95f-17a3-4f7f-96f4-e07c1713c61c" (UID: "14dae95f-17a3-4f7f-96f4-e07c1713c61c"). InnerVolumeSpecName "kube-api-access-c2wsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:05.190047 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2wsv\" (UniqueName: \"kubernetes.io/projected/14dae95f-17a3-4f7f-96f4-e07c1713c61c-kube-api-access-c2wsv\") on node \"crc\" DevicePath \"\"" Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:05.582960 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:05.582817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555564-sbwcx" event={"ID":"14dae95f-17a3-4f7f-96f4-e07c1713c61c","Type":"ContainerDied","Data":"13280ac431dd9c8173337ba670d397ca16836c5e25a576161ec3cd6f769d9819"} Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:05.583565 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13280ac431dd9c8173337ba670d397ca16836c5e25a576161ec3cd6f769d9819" Mar 12 16:44:05 crc kubenswrapper[4687]: I0312 16:44:05.734794 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:44:05 crc kubenswrapper[4687]: E0312 16:44:05.735211 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:44:06 crc kubenswrapper[4687]: I0312 16:44:06.077109 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555558-796tg"] Mar 12 16:44:06 crc kubenswrapper[4687]: I0312 16:44:06.087873 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555558-796tg"] Mar 12 16:44:07 crc kubenswrapper[4687]: I0312 16:44:07.750063 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed5ac57-3c74-4f30-a1d2-90ad98c470fa" path="/var/lib/kubelet/pods/4ed5ac57-3c74-4f30-a1d2-90ad98c470fa/volumes" Mar 12 16:44:17 crc kubenswrapper[4687]: I0312 16:44:17.733203 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:44:17 crc kubenswrapper[4687]: E0312 16:44:17.734065 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:44:31 crc kubenswrapper[4687]: I0312 16:44:31.740579 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:44:31 crc kubenswrapper[4687]: E0312 16:44:31.741294 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:44:33 crc kubenswrapper[4687]: I0312 16:44:33.551274 4687 scope.go:117] "RemoveContainer" containerID="248274be69d61ddf968173d9e2c6045dd651bf6c0480731415b33f2d8d04aee9" Mar 12 16:44:41 crc kubenswrapper[4687]: I0312 16:44:41.990180 4687 generic.go:334] "Generic (PLEG): container finished" podID="092d1dd3-ae4d-4f56-81f7-105d449842f5" containerID="5e6b38a52a787f1702f5df750096a7b069b62aad6f0ef89e507f3601ec6e8046" exitCode=0 Mar 12 16:44:41 crc kubenswrapper[4687]: I0312 16:44:41.990266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" event={"ID":"092d1dd3-ae4d-4f56-81f7-105d449842f5","Type":"ContainerDied","Data":"5e6b38a52a787f1702f5df750096a7b069b62aad6f0ef89e507f3601ec6e8046"} Mar 12 16:44:42 crc kubenswrapper[4687]: I0312 16:44:42.733833 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:44:42 crc kubenswrapper[4687]: E0312 16:44:42.734388 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.593904 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.695082 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-combined-ca-bundle\") pod \"092d1dd3-ae4d-4f56-81f7-105d449842f5\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.695221 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-secret-0\") pod \"092d1dd3-ae4d-4f56-81f7-105d449842f5\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.695468 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-inventory\") pod \"092d1dd3-ae4d-4f56-81f7-105d449842f5\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.695514 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-ssh-key-openstack-edpm-ipam\") pod \"092d1dd3-ae4d-4f56-81f7-105d449842f5\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.695576 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml55z\" (UniqueName: \"kubernetes.io/projected/092d1dd3-ae4d-4f56-81f7-105d449842f5-kube-api-access-ml55z\") pod \"092d1dd3-ae4d-4f56-81f7-105d449842f5\" (UID: \"092d1dd3-ae4d-4f56-81f7-105d449842f5\") " Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.700935 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "092d1dd3-ae4d-4f56-81f7-105d449842f5" (UID: "092d1dd3-ae4d-4f56-81f7-105d449842f5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.701662 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092d1dd3-ae4d-4f56-81f7-105d449842f5-kube-api-access-ml55z" (OuterVolumeSpecName: "kube-api-access-ml55z") pod "092d1dd3-ae4d-4f56-81f7-105d449842f5" (UID: "092d1dd3-ae4d-4f56-81f7-105d449842f5"). InnerVolumeSpecName "kube-api-access-ml55z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.726348 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-inventory" (OuterVolumeSpecName: "inventory") pod "092d1dd3-ae4d-4f56-81f7-105d449842f5" (UID: "092d1dd3-ae4d-4f56-81f7-105d449842f5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.727476 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "092d1dd3-ae4d-4f56-81f7-105d449842f5" (UID: "092d1dd3-ae4d-4f56-81f7-105d449842f5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.741525 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "092d1dd3-ae4d-4f56-81f7-105d449842f5" (UID: "092d1dd3-ae4d-4f56-81f7-105d449842f5"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.798690 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.798723 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.798736 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml55z\" (UniqueName: \"kubernetes.io/projected/092d1dd3-ae4d-4f56-81f7-105d449842f5-kube-api-access-ml55z\") on node \"crc\" DevicePath \"\"" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.798748 4687 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:44:43 crc kubenswrapper[4687]: I0312 16:44:43.798756 4687 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/092d1dd3-ae4d-4f56-81f7-105d449842f5-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.019495 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" event={"ID":"092d1dd3-ae4d-4f56-81f7-105d449842f5","Type":"ContainerDied","Data":"4bbf6950cf12133b4184c97856934dd9d946b8ef065aa06ed72c075ce1086255"} Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.019557 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bbf6950cf12133b4184c97856934dd9d946b8ef065aa06ed72c075ce1086255" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.019568 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m85zr" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.127190 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t"] Mar 12 16:44:44 crc kubenswrapper[4687]: E0312 16:44:44.128003 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092d1dd3-ae4d-4f56-81f7-105d449842f5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.128025 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="092d1dd3-ae4d-4f56-81f7-105d449842f5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 16:44:44 crc kubenswrapper[4687]: E0312 16:44:44.128062 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14dae95f-17a3-4f7f-96f4-e07c1713c61c" containerName="oc" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.128069 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="14dae95f-17a3-4f7f-96f4-e07c1713c61c" containerName="oc" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.128316 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="14dae95f-17a3-4f7f-96f4-e07c1713c61c" containerName="oc" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.128346 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="092d1dd3-ae4d-4f56-81f7-105d449842f5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.129276 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.134051 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.134232 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.134307 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.134128 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.134202 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.134412 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.134261 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.138955 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t"] Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213099 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213145 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213197 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213300 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213523 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fwn\" (UniqueName: \"kubernetes.io/projected/f4e8ecd3-b38d-4144-9662-098445ab656b-kube-api-access-l9fwn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213640 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213794 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213838 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.213901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.214016 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.315992 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316144 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316207 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316257 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316545 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316600 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316713 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9fwn\" (UniqueName: \"kubernetes.io/projected/f4e8ecd3-b38d-4144-9662-098445ab656b-kube-api-access-l9fwn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316795 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.316921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.321304 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.321461 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.321776 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.323841 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.325095 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.326097 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.326800 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.327049 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.328532 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.342065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9fwn\" (UniqueName: \"kubernetes.io/projected/f4e8ecd3-b38d-4144-9662-098445ab656b-kube-api-access-l9fwn\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-b762t\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.449483 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:44:44 crc kubenswrapper[4687]: I0312 16:44:44.998812 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t"] Mar 12 16:44:45 crc kubenswrapper[4687]: I0312 16:44:45.046837 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" event={"ID":"f4e8ecd3-b38d-4144-9662-098445ab656b","Type":"ContainerStarted","Data":"e4ddd81104d1453430689dd6f371532cef94f0b0838fc4e6f58fbd2e410be42a"} Mar 12 16:44:46 crc kubenswrapper[4687]: I0312 16:44:46.059152 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" event={"ID":"f4e8ecd3-b38d-4144-9662-098445ab656b","Type":"ContainerStarted","Data":"a6c9620a60a6f30a52404454d2a8b6ab8546d3135601cd8b9f6bab106ae7c860"} Mar 12 16:44:46 crc kubenswrapper[4687]: I0312 16:44:46.089182 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" podStartSLOduration=1.394508586 podStartE2EDuration="2.089159182s" podCreationTimestamp="2026-03-12 16:44:44 +0000 UTC" firstStartedPulling="2026-03-12 16:44:45.00764323 +0000 UTC m=+2533.971605574" lastFinishedPulling="2026-03-12 16:44:45.702293806 +0000 UTC m=+2534.666256170" observedRunningTime="2026-03-12 16:44:46.079488497 +0000 UTC m=+2535.043450871" watchObservedRunningTime="2026-03-12 16:44:46.089159182 +0000 UTC m=+2535.053121526" Mar 12 16:44:56 crc kubenswrapper[4687]: I0312 16:44:56.734781 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:44:56 crc kubenswrapper[4687]: E0312 16:44:56.735838 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.139560 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j"] Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.142009 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.143794 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.146772 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.154870 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j"] Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.155589 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a56ae63-1736-499e-b424-a3a0f8936561-config-volume\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.155684 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a56ae63-1736-499e-b424-a3a0f8936561-secret-volume\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.155799 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbj9f\" (UniqueName: \"kubernetes.io/projected/5a56ae63-1736-499e-b424-a3a0f8936561-kube-api-access-cbj9f\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.258204 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbj9f\" (UniqueName: \"kubernetes.io/projected/5a56ae63-1736-499e-b424-a3a0f8936561-kube-api-access-cbj9f\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.258839 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a56ae63-1736-499e-b424-a3a0f8936561-config-volume\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.258905 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a56ae63-1736-499e-b424-a3a0f8936561-secret-volume\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.259899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a56ae63-1736-499e-b424-a3a0f8936561-config-volume\") pod 
\"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.267651 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a56ae63-1736-499e-b424-a3a0f8936561-secret-volume\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.274191 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbj9f\" (UniqueName: \"kubernetes.io/projected/5a56ae63-1736-499e-b424-a3a0f8936561-kube-api-access-cbj9f\") pod \"collect-profiles-29555565-ztn4j\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.505932 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:00 crc kubenswrapper[4687]: I0312 16:45:00.966530 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j"] Mar 12 16:45:01 crc kubenswrapper[4687]: I0312 16:45:01.219048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" event={"ID":"5a56ae63-1736-499e-b424-a3a0f8936561","Type":"ContainerStarted","Data":"74c6b95ca46e5f1b91aa5d5ddaa8e11e8ac2690a9b1cdd4aa40768fa7889a90a"} Mar 12 16:45:01 crc kubenswrapper[4687]: I0312 16:45:01.219335 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" event={"ID":"5a56ae63-1736-499e-b424-a3a0f8936561","Type":"ContainerStarted","Data":"28d816367f6d06bffcfe49e72ec772c5d6da7640d82aaff0e5a20bb6cdd556ac"} Mar 12 16:45:01 crc kubenswrapper[4687]: I0312 16:45:01.238835 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" podStartSLOduration=1.238816 podStartE2EDuration="1.238816s" podCreationTimestamp="2026-03-12 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:45:01.23112511 +0000 UTC m=+2550.195087454" watchObservedRunningTime="2026-03-12 16:45:01.238816 +0000 UTC m=+2550.202778344" Mar 12 16:45:02 crc kubenswrapper[4687]: I0312 16:45:02.234190 4687 generic.go:334] "Generic (PLEG): container finished" podID="5a56ae63-1736-499e-b424-a3a0f8936561" containerID="74c6b95ca46e5f1b91aa5d5ddaa8e11e8ac2690a9b1cdd4aa40768fa7889a90a" exitCode=0 Mar 12 16:45:02 crc kubenswrapper[4687]: I0312 16:45:02.234300 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" event={"ID":"5a56ae63-1736-499e-b424-a3a0f8936561","Type":"ContainerDied","Data":"74c6b95ca46e5f1b91aa5d5ddaa8e11e8ac2690a9b1cdd4aa40768fa7889a90a"} Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.694564 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.751744 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a56ae63-1736-499e-b424-a3a0f8936561-config-volume\") pod \"5a56ae63-1736-499e-b424-a3a0f8936561\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.754662 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a56ae63-1736-499e-b424-a3a0f8936561-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a56ae63-1736-499e-b424-a3a0f8936561" (UID: "5a56ae63-1736-499e-b424-a3a0f8936561"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.754856 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbj9f\" (UniqueName: \"kubernetes.io/projected/5a56ae63-1736-499e-b424-a3a0f8936561-kube-api-access-cbj9f\") pod \"5a56ae63-1736-499e-b424-a3a0f8936561\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.755992 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a56ae63-1736-499e-b424-a3a0f8936561-secret-volume\") pod \"5a56ae63-1736-499e-b424-a3a0f8936561\" (UID: \"5a56ae63-1736-499e-b424-a3a0f8936561\") " Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.757227 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a56ae63-1736-499e-b424-a3a0f8936561-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.761755 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a56ae63-1736-499e-b424-a3a0f8936561-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a56ae63-1736-499e-b424-a3a0f8936561" (UID: "5a56ae63-1736-499e-b424-a3a0f8936561"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.762183 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a56ae63-1736-499e-b424-a3a0f8936561-kube-api-access-cbj9f" (OuterVolumeSpecName: "kube-api-access-cbj9f") pod "5a56ae63-1736-499e-b424-a3a0f8936561" (UID: "5a56ae63-1736-499e-b424-a3a0f8936561"). InnerVolumeSpecName "kube-api-access-cbj9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.859457 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a56ae63-1736-499e-b424-a3a0f8936561-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 16:45:03 crc kubenswrapper[4687]: I0312 16:45:03.859510 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbj9f\" (UniqueName: \"kubernetes.io/projected/5a56ae63-1736-499e-b424-a3a0f8936561-kube-api-access-cbj9f\") on node \"crc\" DevicePath \"\"" Mar 12 16:45:04 crc kubenswrapper[4687]: I0312 16:45:04.254562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" event={"ID":"5a56ae63-1736-499e-b424-a3a0f8936561","Type":"ContainerDied","Data":"28d816367f6d06bffcfe49e72ec772c5d6da7640d82aaff0e5a20bb6cdd556ac"} Mar 12 16:45:04 crc kubenswrapper[4687]: I0312 16:45:04.254615 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d816367f6d06bffcfe49e72ec772c5d6da7640d82aaff0e5a20bb6cdd556ac" Mar 12 16:45:04 crc kubenswrapper[4687]: I0312 16:45:04.254612 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j" Mar 12 16:45:04 crc kubenswrapper[4687]: I0312 16:45:04.336449 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2"] Mar 12 16:45:04 crc kubenswrapper[4687]: I0312 16:45:04.349519 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555520-fxcc2"] Mar 12 16:45:05 crc kubenswrapper[4687]: I0312 16:45:05.745949 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2" path="/var/lib/kubelet/pods/0d8c816a-2bd6-48a0-8f5e-20aaf20f4ae2/volumes" Mar 12 16:45:11 crc kubenswrapper[4687]: I0312 16:45:11.751041 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:45:11 crc kubenswrapper[4687]: E0312 16:45:11.752155 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:45:23 crc kubenswrapper[4687]: I0312 16:45:23.733723 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:45:24 crc kubenswrapper[4687]: I0312 16:45:24.533053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"46abef41881c9a6f28882d6a10c8f0678dc2602721fd1e31fad6112aecd70e2a"} Mar 12 16:45:33 crc kubenswrapper[4687]: I0312 16:45:33.635555 4687 scope.go:117] "RemoveContainer" containerID="bae00bf8f9e71811367ef40e09e7125f32e3a8edb60034c88b2e66c26373fa7c" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.138725 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29555566-s4gkh"] Mar 12 16:46:00 crc kubenswrapper[4687]: E0312 16:46:00.139831 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a56ae63-1736-499e-b424-a3a0f8936561" containerName="collect-profiles" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.139850 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a56ae63-1736-499e-b424-a3a0f8936561" containerName="collect-profiles" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.140210 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a56ae63-1736-499e-b424-a3a0f8936561" containerName="collect-profiles" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.141117 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555566-s4gkh" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.144621 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.144805 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.145045 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.148815 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555566-s4gkh"] Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.257018 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqt6\" (UniqueName: \"kubernetes.io/projected/c8b163ef-54ed-47b4-8284-93ffc647b4ef-kube-api-access-sbqt6\") pod \"auto-csr-approver-29555566-s4gkh\" (UID: \"c8b163ef-54ed-47b4-8284-93ffc647b4ef\") " pod="openshift-infra/auto-csr-approver-29555566-s4gkh" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.359948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqt6\" (UniqueName: \"kubernetes.io/projected/c8b163ef-54ed-47b4-8284-93ffc647b4ef-kube-api-access-sbqt6\") pod \"auto-csr-approver-29555566-s4gkh\" (UID: \"c8b163ef-54ed-47b4-8284-93ffc647b4ef\") " pod="openshift-infra/auto-csr-approver-29555566-s4gkh" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.384833 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqt6\" (UniqueName: \"kubernetes.io/projected/c8b163ef-54ed-47b4-8284-93ffc647b4ef-kube-api-access-sbqt6\") pod \"auto-csr-approver-29555566-s4gkh\" (UID: \"c8b163ef-54ed-47b4-8284-93ffc647b4ef\") " pod="openshift-infra/auto-csr-approver-29555566-s4gkh" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.491126 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555566-s4gkh" Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.996841 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555566-s4gkh"] Mar 12 16:46:00 crc kubenswrapper[4687]: I0312 16:46:00.997399 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:46:01 crc kubenswrapper[4687]: I0312 16:46:01.956638 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555566-s4gkh" event={"ID":"c8b163ef-54ed-47b4-8284-93ffc647b4ef","Type":"ContainerStarted","Data":"c47ac970be6d35aff30189f304797c36f7b95e0af2c334ab472e8041cca0fdb2"} Mar 12 16:46:02 crc kubenswrapper[4687]: I0312 16:46:02.968680 4687 generic.go:334] "Generic (PLEG): container finished" podID="c8b163ef-54ed-47b4-8284-93ffc647b4ef" containerID="ac3a666a1c8ef15974d032fd72be1493e926810603be3ec69e85b953e004b986" exitCode=0 Mar 12 16:46:02 crc kubenswrapper[4687]: I0312 16:46:02.969007 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555566-s4gkh" event={"ID":"c8b163ef-54ed-47b4-8284-93ffc647b4ef","Type":"ContainerDied","Data":"ac3a666a1c8ef15974d032fd72be1493e926810603be3ec69e85b953e004b986"} Mar 12 16:46:04 crc kubenswrapper[4687]: I0312 16:46:04.359569 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555566-s4gkh" Mar 12 16:46:04 crc kubenswrapper[4687]: I0312 16:46:04.498955 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbqt6\" (UniqueName: \"kubernetes.io/projected/c8b163ef-54ed-47b4-8284-93ffc647b4ef-kube-api-access-sbqt6\") pod \"c8b163ef-54ed-47b4-8284-93ffc647b4ef\" (UID: \"c8b163ef-54ed-47b4-8284-93ffc647b4ef\") " Mar 12 16:46:04 crc kubenswrapper[4687]: I0312 16:46:04.505562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b163ef-54ed-47b4-8284-93ffc647b4ef-kube-api-access-sbqt6" (OuterVolumeSpecName: "kube-api-access-sbqt6") pod "c8b163ef-54ed-47b4-8284-93ffc647b4ef" (UID: "c8b163ef-54ed-47b4-8284-93ffc647b4ef"). InnerVolumeSpecName "kube-api-access-sbqt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:46:04 crc kubenswrapper[4687]: I0312 16:46:04.603170 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbqt6\" (UniqueName: \"kubernetes.io/projected/c8b163ef-54ed-47b4-8284-93ffc647b4ef-kube-api-access-sbqt6\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:04 crc kubenswrapper[4687]: I0312 16:46:04.989381 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555566-s4gkh" event={"ID":"c8b163ef-54ed-47b4-8284-93ffc647b4ef","Type":"ContainerDied","Data":"c47ac970be6d35aff30189f304797c36f7b95e0af2c334ab472e8041cca0fdb2"} Mar 12 16:46:04 crc kubenswrapper[4687]: I0312 16:46:04.989640 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47ac970be6d35aff30189f304797c36f7b95e0af2c334ab472e8041cca0fdb2" Mar 12 16:46:04 crc kubenswrapper[4687]: I0312 16:46:04.989443 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555566-s4gkh" Mar 12 16:46:05 crc kubenswrapper[4687]: I0312 16:46:05.436718 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555560-dlhfz"] Mar 12 16:46:05 crc kubenswrapper[4687]: I0312 16:46:05.448427 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555560-dlhfz"] Mar 12 16:46:05 crc kubenswrapper[4687]: I0312 16:46:05.747216 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785c3ff1-a21e-4abf-a8a1-18f5c270b0a8" path="/var/lib/kubelet/pods/785c3ff1-a21e-4abf-a8a1-18f5c270b0a8/volumes" Mar 12 16:46:33 crc kubenswrapper[4687]: I0312 16:46:33.722066 4687 scope.go:117] "RemoveContainer" containerID="198009147b2b53efd5f4ccfe80435ac037301bbf7954d0592ace504e1d79a7fc" Mar 12 16:46:54 crc kubenswrapper[4687]: I0312 16:46:54.555782 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4e8ecd3-b38d-4144-9662-098445ab656b" containerID="a6c9620a60a6f30a52404454d2a8b6ab8546d3135601cd8b9f6bab106ae7c860" exitCode=0 Mar 12 16:46:54 crc kubenswrapper[4687]: I0312 16:46:54.555871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" event={"ID":"f4e8ecd3-b38d-4144-9662-098445ab656b","Type":"ContainerDied","Data":"a6c9620a60a6f30a52404454d2a8b6ab8546d3135601cd8b9f6bab106ae7c860"} Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.113718 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235205 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-3\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235293 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-ssh-key-openstack-edpm-ipam\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235342 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9fwn\" (UniqueName: \"kubernetes.io/projected/f4e8ecd3-b38d-4144-9662-098445ab656b-kube-api-access-l9fwn\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235388 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-inventory\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235474 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-1\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235577 
4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-1\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235622 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-extra-config-0\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235680 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-combined-ca-bundle\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235704 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-0\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-0\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.235848 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-2\") pod \"f4e8ecd3-b38d-4144-9662-098445ab656b\" (UID: \"f4e8ecd3-b38d-4144-9662-098445ab656b\") " Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.255638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.256067 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e8ecd3-b38d-4144-9662-098445ab656b-kube-api-access-l9fwn" (OuterVolumeSpecName: "kube-api-access-l9fwn") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "kube-api-access-l9fwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.273631 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.283667 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.301336 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.311136 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.320261 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.330310 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.331544 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-inventory" (OuterVolumeSpecName: "inventory") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.332650 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.338903 4687 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.338937 4687 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.338951 4687 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.338962 4687 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.338975 4687 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.338987 4687 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.338998 4687 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.339010 4687 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.339022 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9fwn\" (UniqueName: \"kubernetes.io/projected/f4e8ecd3-b38d-4144-9662-098445ab656b-kube-api-access-l9fwn\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.339034 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.349343 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4e8ecd3-b38d-4144-9662-098445ab656b" (UID: "f4e8ecd3-b38d-4144-9662-098445ab656b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.440985 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e8ecd3-b38d-4144-9662-098445ab656b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.594533 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" event={"ID":"f4e8ecd3-b38d-4144-9662-098445ab656b","Type":"ContainerDied","Data":"e4ddd81104d1453430689dd6f371532cef94f0b0838fc4e6f58fbd2e410be42a"} Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.595161 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ddd81104d1453430689dd6f371532cef94f0b0838fc4e6f58fbd2e410be42a" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.594598 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b762t" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.700250 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9"] Mar 12 16:46:56 crc kubenswrapper[4687]: E0312 16:46:56.700872 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b163ef-54ed-47b4-8284-93ffc647b4ef" containerName="oc" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.700898 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b163ef-54ed-47b4-8284-93ffc647b4ef" containerName="oc" Mar 12 16:46:56 crc kubenswrapper[4687]: E0312 16:46:56.700928 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e8ecd3-b38d-4144-9662-098445ab656b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.700936 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e8ecd3-b38d-4144-9662-098445ab656b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.701395 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e8ecd3-b38d-4144-9662-098445ab656b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.701411 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b163ef-54ed-47b4-8284-93ffc647b4ef" containerName="oc" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.703849 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.705628 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.705676 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.708451 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.708784 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.712047 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.731386 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9"] Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.851059 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.851319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.851478 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shg6p\" (UniqueName: \"kubernetes.io/projected/3c549521-57d3-4f63-b447-5700df3e3a47-kube-api-access-shg6p\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.851534 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.851698 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 
crc kubenswrapper[4687]: I0312 16:46:56.851768 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.852590 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.956230 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shg6p\" (UniqueName: \"kubernetes.io/projected/3c549521-57d3-4f63-b447-5700df3e3a47-kube-api-access-shg6p\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.956505 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.956748 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.956864 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.956934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.957159 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" 
(UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.957610 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.961153 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.961235 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.961657 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.961757 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.962269 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.963045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:56 crc kubenswrapper[4687]: I0312 16:46:56.979863 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shg6p\" (UniqueName: \"kubernetes.io/projected/3c549521-57d3-4f63-b447-5700df3e3a47-kube-api-access-shg6p\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:57 crc kubenswrapper[4687]: I0312 16:46:57.023831 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:46:57 crc kubenswrapper[4687]: I0312 16:46:57.585778 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9"] Mar 12 16:46:57 crc kubenswrapper[4687]: W0312 16:46:57.593836 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c549521_57d3_4f63_b447_5700df3e3a47.slice/crio-ca7d2c4c933f281c713153c74b262b9afe82c6aab26af262e992bf9b2de88c03 WatchSource:0}: Error finding container ca7d2c4c933f281c713153c74b262b9afe82c6aab26af262e992bf9b2de88c03: Status 404 returned error can't find the container with id ca7d2c4c933f281c713153c74b262b9afe82c6aab26af262e992bf9b2de88c03 Mar 12 16:46:57 crc kubenswrapper[4687]: I0312 16:46:57.609392 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" event={"ID":"3c549521-57d3-4f63-b447-5700df3e3a47","Type":"ContainerStarted","Data":"ca7d2c4c933f281c713153c74b262b9afe82c6aab26af262e992bf9b2de88c03"} Mar 12 16:46:58 crc kubenswrapper[4687]: I0312 16:46:58.620958 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" event={"ID":"3c549521-57d3-4f63-b447-5700df3e3a47","Type":"ContainerStarted","Data":"be1a3e2829a14818b9d84c32d79b802988d088d541b5c2399d70266f2e3e5761"} Mar 12 16:46:58 crc kubenswrapper[4687]: I0312 16:46:58.654216 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" podStartSLOduration=2.07813928 podStartE2EDuration="2.654193165s" podCreationTimestamp="2026-03-12 16:46:56 +0000 UTC" firstStartedPulling="2026-03-12 16:46:57.596641412 +0000 UTC m=+2666.560603756" lastFinishedPulling="2026-03-12 16:46:58.172695297 +0000 UTC m=+2667.136657641" observedRunningTime="2026-03-12 16:46:58.637006314 +0000 UTC m=+2667.600968688" watchObservedRunningTime="2026-03-12 16:46:58.654193165 +0000 UTC m=+2667.618155519" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.023001 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lbgkx"] Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.026907 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.042724 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbgkx"] Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.108213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d2daa11-2756-4a7c-860a-44c13ab92d91-utilities\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.108600 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d2daa11-2756-4a7c-860a-44c13ab92d91-catalog-content\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.108671 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-795nq\" (UniqueName: \"kubernetes.io/projected/5d2daa11-2756-4a7c-860a-44c13ab92d91-kube-api-access-795nq\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.211299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d2daa11-2756-4a7c-860a-44c13ab92d91-utilities\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.211462 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d2daa11-2756-4a7c-860a-44c13ab92d91-catalog-content\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.211499 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-795nq\" (UniqueName: \"kubernetes.io/projected/5d2daa11-2756-4a7c-860a-44c13ab92d91-kube-api-access-795nq\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.212287 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d2daa11-2756-4a7c-860a-44c13ab92d91-utilities\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.212521 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d2daa11-2756-4a7c-860a-44c13ab92d91-catalog-content\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.237987 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-795nq\" (UniqueName: \"kubernetes.io/projected/5d2daa11-2756-4a7c-860a-44c13ab92d91-kube-api-access-795nq\") pod \"redhat-operators-lbgkx\" (UID: \"5d2daa11-2756-4a7c-860a-44c13ab92d91\") " pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.347068 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:08 crc kubenswrapper[4687]: I0312 16:47:08.857679 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbgkx"] Mar 12 16:47:09 crc kubenswrapper[4687]: I0312 16:47:09.758472 4687 generic.go:334] "Generic (PLEG): container finished" podID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerID="a4e2b30274f2a005af197cb1ec7e00b8a74752449375f0b0bb4aa388e1c8f432" exitCode=0 Mar 12 16:47:09 crc kubenswrapper[4687]: I0312 16:47:09.758546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgkx" event={"ID":"5d2daa11-2756-4a7c-860a-44c13ab92d91","Type":"ContainerDied","Data":"a4e2b30274f2a005af197cb1ec7e00b8a74752449375f0b0bb4aa388e1c8f432"} Mar 12 16:47:09 crc kubenswrapper[4687]: I0312 16:47:09.758833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgkx" event={"ID":"5d2daa11-2756-4a7c-860a-44c13ab92d91","Type":"ContainerStarted","Data":"e270714161e56fce5879f3fd8bb310cb115c563b69bf217a0d443270b9764b98"} Mar 12 16:47:21 crc kubenswrapper[4687]: I0312 16:47:21.914062 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgkx" event={"ID":"5d2daa11-2756-4a7c-860a-44c13ab92d91","Type":"ContainerStarted","Data":"a2fba6f8f76fd49dcb2dc7393e461c92a9d15e5fe68849c7c0833f315da967f5"} Mar 12 16:47:23 crc kubenswrapper[4687]: I0312 16:47:23.944530 4687 generic.go:334] "Generic (PLEG): container finished" podID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerID="a2fba6f8f76fd49dcb2dc7393e461c92a9d15e5fe68849c7c0833f315da967f5" exitCode=0 Mar 12 16:47:23 crc kubenswrapper[4687]: I0312 16:47:23.944624 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgkx" event={"ID":"5d2daa11-2756-4a7c-860a-44c13ab92d91","Type":"ContainerDied","Data":"a2fba6f8f76fd49dcb2dc7393e461c92a9d15e5fe68849c7c0833f315da967f5"} Mar 12 16:47:25 crc kubenswrapper[4687]: I0312 16:47:25.973211 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgkx" event={"ID":"5d2daa11-2756-4a7c-860a-44c13ab92d91","Type":"ContainerStarted","Data":"4b01fe2deab17db679191498ed9b12c38349f6a062d6ba2f433f01778addf562"} Mar 12 16:47:26 crc kubenswrapper[4687]: I0312 16:47:26.008976 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lbgkx" podStartSLOduration=3.799187268 podStartE2EDuration="19.008958711s" podCreationTimestamp="2026-03-12 16:47:07 +0000 UTC" firstStartedPulling="2026-03-12 16:47:09.761249162 +0000 UTC m=+2678.725211506" lastFinishedPulling="2026-03-12 16:47:24.971020595 +0000 UTC m=+2693.934982949" observedRunningTime="2026-03-12 16:47:25.992417048 +0000 UTC m=+2694.956379402" watchObservedRunningTime="2026-03-12 16:47:26.008958711 +0000 UTC m=+2694.972921055" Mar 12 16:47:28 crc kubenswrapper[4687]: I0312 16:47:28.348393 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lbgkx" 
Mar 12 16:47:28 crc kubenswrapper[4687]: I0312 16:47:28.348668 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:29 crc kubenswrapper[4687]: I0312 16:47:29.395187 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lbgkx" podUID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerName="registry-server" probeResult="failure" output=< Mar 12 16:47:29 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:47:29 crc kubenswrapper[4687]: > Mar 12 16:47:39 crc kubenswrapper[4687]: I0312 16:47:39.403683 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lbgkx" podUID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerName="registry-server" probeResult="failure" output=< Mar 12 16:47:39 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:47:39 crc kubenswrapper[4687]: > Mar 12 16:47:44 crc kubenswrapper[4687]: I0312 16:47:44.121917 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:47:44 crc kubenswrapper[4687]: I0312 16:47:44.122582 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:47:48 crc kubenswrapper[4687]: I0312 16:47:48.413513 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:48 crc kubenswrapper[4687]: I0312 16:47:48.493442 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lbgkx" Mar 12 16:47:48 crc kubenswrapper[4687]: I0312 16:47:48.618498 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbgkx"] Mar 12 16:47:48 crc kubenswrapper[4687]: I0312 16:47:48.677023 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9b47"] Mar 12 16:47:48 crc kubenswrapper[4687]: I0312 16:47:48.677256 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9b47" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="registry-server" containerID="cri-o://df5c4edfd19adb4ba5e38fa774d6eb5ba06f70c2fc6381ccf8c0f9823f0c7f10" gracePeriod=2 Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.261684 4687 generic.go:334] "Generic (PLEG): container finished" podID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerID="df5c4edfd19adb4ba5e38fa774d6eb5ba06f70c2fc6381ccf8c0f9823f0c7f10" exitCode=0 Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.261945 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9b47" event={"ID":"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520","Type":"ContainerDied","Data":"df5c4edfd19adb4ba5e38fa774d6eb5ba06f70c2fc6381ccf8c0f9823f0c7f10"} Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.262259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-z9b47" event={"ID":"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520","Type":"ContainerDied","Data":"88e28dbc7ae88fd473836fe4fee7f8af6568c939ae77e919516ba1abea11c8d4"} Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.262301 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88e28dbc7ae88fd473836fe4fee7f8af6568c939ae77e919516ba1abea11c8d4" Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.349552 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.476813 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-catalog-content\") pod \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.477092 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-utilities\") pod \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.477216 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxvtx\" (UniqueName: \"kubernetes.io/projected/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-kube-api-access-wxvtx\") pod \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\" (UID: \"8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520\") " Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.478778 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-utilities" (OuterVolumeSpecName: "utilities") pod "8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" (UID: "8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.498793 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-kube-api-access-wxvtx" (OuterVolumeSpecName: "kube-api-access-wxvtx") pod "8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" (UID: "8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520"). InnerVolumeSpecName "kube-api-access-wxvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.579454 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxvtx\" (UniqueName: \"kubernetes.io/projected/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-kube-api-access-wxvtx\") on node \"crc\" DevicePath \"\"" Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.579485 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.605246 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" (UID: "8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:47:49 crc kubenswrapper[4687]: I0312 16:47:49.681970 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:47:50 crc kubenswrapper[4687]: I0312 16:47:50.275435 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9b47" Mar 12 16:47:50 crc kubenswrapper[4687]: I0312 16:47:50.302664 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9b47"] Mar 12 16:47:50 crc kubenswrapper[4687]: I0312 16:47:50.312916 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9b47"] Mar 12 16:47:51 crc kubenswrapper[4687]: I0312 16:47:51.745558 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" path="/var/lib/kubelet/pods/8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520/volumes" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.337984 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw"] Mar 12 16:47:52 crc kubenswrapper[4687]: E0312 16:47:52.338639 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="extract-utilities" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.338658 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="extract-utilities" Mar 12 16:47:52 crc kubenswrapper[4687]: E0312 16:47:52.338675 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="registry-server" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.338683 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="registry-server" Mar 12 16:47:52 crc kubenswrapper[4687]: E0312 16:47:52.338703 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="extract-content" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.338709 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="extract-content" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.338940 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a5d13f6-5ebe-406d-a1b7-b7aeeef8a520" containerName="registry-server" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.340517 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.342676 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.351539 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw"] Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.447779 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.447865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.447975 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2tst\" (UniqueName: \"kubernetes.io/projected/a1d956c5-a568-4aff-ab9c-0f64eda22177-kube-api-access-g2tst\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.550498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.550824 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.551016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2tst\" (UniqueName: \"kubernetes.io/projected/a1d956c5-a568-4aff-ab9c-0f64eda22177-kube-api-access-g2tst\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.551216 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.551448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.574591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2tst\" (UniqueName: \"kubernetes.io/projected/a1d956c5-a568-4aff-ab9c-0f64eda22177-kube-api-access-g2tst\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:52 crc kubenswrapper[4687]: I0312 16:47:52.665330 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:53 crc kubenswrapper[4687]: I0312 16:47:53.198882 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw"] Mar 12 16:47:53 crc kubenswrapper[4687]: I0312 16:47:53.313785 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" event={"ID":"a1d956c5-a568-4aff-ab9c-0f64eda22177","Type":"ContainerStarted","Data":"e42e06ff04d22a20f8b4c2b9b8c6900a1d6af029bad691ac3113d0bad0485055"} Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.110268 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58"] Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.112643 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.130322 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58"] Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.189328 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.189392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvzq\" (UniqueName: \"kubernetes.io/projected/cff33c98-5d1d-4726-adde-df3333665efa-kube-api-access-tpvzq\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.189428 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.291843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.291889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpvzq\" (UniqueName: \"kubernetes.io/projected/cff33c98-5d1d-4726-adde-df3333665efa-kube-api-access-tpvzq\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.291917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.292559 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " 
pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.292582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.312058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpvzq\" (UniqueName: \"kubernetes.io/projected/cff33c98-5d1d-4726-adde-df3333665efa-kube-api-access-tpvzq\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.324475 4687 generic.go:334] "Generic (PLEG): container finished" podID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerID="863168e95d7005b3b525a740ec9e703948b155e1e31a406c592abab551f9cfdc" exitCode=0 Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.324585 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" event={"ID":"a1d956c5-a568-4aff-ab9c-0f64eda22177","Type":"ContainerDied","Data":"863168e95d7005b3b525a740ec9e703948b155e1e31a406c592abab551f9cfdc"} Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.433885 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:54 crc kubenswrapper[4687]: I0312 16:47:54.920852 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58"] Mar 12 16:47:54 crc kubenswrapper[4687]: W0312 16:47:54.933414 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcff33c98_5d1d_4726_adde_df3333665efa.slice/crio-760618169be4d6113f92cde26e1f8fca3a4363fe49923e7d7705511b818e7c1b WatchSource:0}: Error finding container 760618169be4d6113f92cde26e1f8fca3a4363fe49923e7d7705511b818e7c1b: Status 404 returned error can't find the container with id 760618169be4d6113f92cde26e1f8fca3a4363fe49923e7d7705511b818e7c1b Mar 12 16:47:55 crc kubenswrapper[4687]: I0312 16:47:55.336416 4687 generic.go:334] "Generic (PLEG): container finished" podID="cff33c98-5d1d-4726-adde-df3333665efa" containerID="dba3b4d1376a7f86275c7205541d58a52433855c74e1b5dae134598f425ca6fa" exitCode=0 Mar 12 16:47:55 crc kubenswrapper[4687]: I0312 16:47:55.336484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" event={"ID":"cff33c98-5d1d-4726-adde-df3333665efa","Type":"ContainerDied","Data":"dba3b4d1376a7f86275c7205541d58a52433855c74e1b5dae134598f425ca6fa"} Mar 12 16:47:55 crc kubenswrapper[4687]: I0312 16:47:55.336680 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" 
event={"ID":"cff33c98-5d1d-4726-adde-df3333665efa","Type":"ContainerStarted","Data":"760618169be4d6113f92cde26e1f8fca3a4363fe49923e7d7705511b818e7c1b"} Mar 12 16:47:55 crc kubenswrapper[4687]: E0312 16:47:55.861650 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d956c5_a568_4aff_ab9c_0f64eda22177.slice/crio-conmon-1594376063f82379710adf99527af08e3d372fd58b77bf41e3defda14cccad5a.scope\": RecentStats: unable to find data in memory cache]" Mar 12 16:47:56 crc kubenswrapper[4687]: I0312 16:47:56.352267 4687 generic.go:334] "Generic (PLEG): container finished" podID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerID="1594376063f82379710adf99527af08e3d372fd58b77bf41e3defda14cccad5a" exitCode=0 Mar 12 16:47:56 crc kubenswrapper[4687]: I0312 16:47:56.352323 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" event={"ID":"a1d956c5-a568-4aff-ab9c-0f64eda22177","Type":"ContainerDied","Data":"1594376063f82379710adf99527af08e3d372fd58b77bf41e3defda14cccad5a"} Mar 12 16:47:57 crc kubenswrapper[4687]: I0312 16:47:57.362965 4687 generic.go:334] "Generic (PLEG): container finished" podID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerID="e6a466d5d692a3dc7a7c58f3f64572ed87fc3166db67f6ffa0de3bccbea4623a" exitCode=0 Mar 12 16:47:57 crc kubenswrapper[4687]: I0312 16:47:57.363037 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" event={"ID":"a1d956c5-a568-4aff-ab9c-0f64eda22177","Type":"ContainerDied","Data":"e6a466d5d692a3dc7a7c58f3f64572ed87fc3166db67f6ffa0de3bccbea4623a"} Mar 12 16:47:57 crc kubenswrapper[4687]: I0312 16:47:57.366036 4687 generic.go:334] "Generic (PLEG): container finished" podID="cff33c98-5d1d-4726-adde-df3333665efa" containerID="e22f8402f4cce028c0fe3cbc56e2e010bd1d4ab4b74717cf915070ad9b20860b" exitCode=0 Mar 12 16:47:57 crc kubenswrapper[4687]: I0312 16:47:57.366084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" event={"ID":"cff33c98-5d1d-4726-adde-df3333665efa","Type":"ContainerDied","Data":"e22f8402f4cce028c0fe3cbc56e2e010bd1d4ab4b74717cf915070ad9b20860b"} Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.395343 4687 generic.go:334] "Generic (PLEG): container finished" podID="cff33c98-5d1d-4726-adde-df3333665efa" containerID="dc2185ec75a3ab6923fa442d5e60b92c8f09196b8c325877c843dd848005097c" exitCode=0 Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.395427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" event={"ID":"cff33c98-5d1d-4726-adde-df3333665efa","Type":"ContainerDied","Data":"dc2185ec75a3ab6923fa442d5e60b92c8f09196b8c325877c843dd848005097c"} Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.857853 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.907295 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2tst\" (UniqueName: \"kubernetes.io/projected/a1d956c5-a568-4aff-ab9c-0f64eda22177-kube-api-access-g2tst\") pod \"a1d956c5-a568-4aff-ab9c-0f64eda22177\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.907453 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-util\") pod \"a1d956c5-a568-4aff-ab9c-0f64eda22177\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.907543 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-bundle\") pod \"a1d956c5-a568-4aff-ab9c-0f64eda22177\" (UID: \"a1d956c5-a568-4aff-ab9c-0f64eda22177\") " Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.908571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-bundle" (OuterVolumeSpecName: "bundle") pod "a1d956c5-a568-4aff-ab9c-0f64eda22177" (UID: "a1d956c5-a568-4aff-ab9c-0f64eda22177"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:47:58 crc kubenswrapper[4687]: I0312 16:47:58.913533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d956c5-a568-4aff-ab9c-0f64eda22177-kube-api-access-g2tst" (OuterVolumeSpecName: "kube-api-access-g2tst") pod "a1d956c5-a568-4aff-ab9c-0f64eda22177" (UID: "a1d956c5-a568-4aff-ab9c-0f64eda22177"). InnerVolumeSpecName "kube-api-access-g2tst". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.010205 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.010418 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2tst\" (UniqueName: \"kubernetes.io/projected/a1d956c5-a568-4aff-ab9c-0f64eda22177-kube-api-access-g2tst\") on node \"crc\" DevicePath \"\"" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.114259 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-util" (OuterVolumeSpecName: "util") pod "a1d956c5-a568-4aff-ab9c-0f64eda22177" (UID: "a1d956c5-a568-4aff-ab9c-0f64eda22177"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.214806 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1d956c5-a568-4aff-ab9c-0f64eda22177-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.411016 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" event={"ID":"a1d956c5-a568-4aff-ab9c-0f64eda22177","Type":"ContainerDied","Data":"e42e06ff04d22a20f8b4c2b9b8c6900a1d6af029bad691ac3113d0bad0485055"} Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.411065 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e42e06ff04d22a20f8b4c2b9b8c6900a1d6af029bad691ac3113d0bad0485055" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.411123 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.865368 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.942057 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-util\") pod \"cff33c98-5d1d-4726-adde-df3333665efa\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.942509 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpvzq\" (UniqueName: \"kubernetes.io/projected/cff33c98-5d1d-4726-adde-df3333665efa-kube-api-access-tpvzq\") pod \"cff33c98-5d1d-4726-adde-df3333665efa\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.942588 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-bundle\") pod \"cff33c98-5d1d-4726-adde-df3333665efa\" (UID: \"cff33c98-5d1d-4726-adde-df3333665efa\") " Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.944373 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-bundle" (OuterVolumeSpecName: "bundle") pod "cff33c98-5d1d-4726-adde-df3333665efa" (UID: "cff33c98-5d1d-4726-adde-df3333665efa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.949856 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff33c98-5d1d-4726-adde-df3333665efa-kube-api-access-tpvzq" (OuterVolumeSpecName: "kube-api-access-tpvzq") pod "cff33c98-5d1d-4726-adde-df3333665efa" (UID: "cff33c98-5d1d-4726-adde-df3333665efa"). InnerVolumeSpecName "kube-api-access-tpvzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:47:59 crc kubenswrapper[4687]: I0312 16:47:59.976281 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-util" (OuterVolumeSpecName: "util") pod "cff33c98-5d1d-4726-adde-df3333665efa" (UID: "cff33c98-5d1d-4726-adde-df3333665efa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.046903 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpvzq\" (UniqueName: \"kubernetes.io/projected/cff33c98-5d1d-4726-adde-df3333665efa-kube-api-access-tpvzq\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.046961 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.046970 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cff33c98-5d1d-4726-adde-df3333665efa-util\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.143495 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555568-qcbw9"] Mar 12 16:48:00 crc kubenswrapper[4687]: E0312 16:48:00.144034 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerName="pull" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.144056 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerName="pull" Mar 12 16:48:00 crc kubenswrapper[4687]: E0312 16:48:00.144073 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerName="extract" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.144082 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerName="extract" Mar 12 16:48:00 crc kubenswrapper[4687]: E0312 16:48:00.144120 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff33c98-5d1d-4726-adde-df3333665efa" containerName="util" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.144127 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff33c98-5d1d-4726-adde-df3333665efa" containerName="util" Mar 12 16:48:00 crc kubenswrapper[4687]: E0312 16:48:00.144141 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerName="util" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.144147 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerName="util" Mar 12 16:48:00 crc kubenswrapper[4687]: E0312 16:48:00.144158 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff33c98-5d1d-4726-adde-df3333665efa" containerName="pull" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.144164 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff33c98-5d1d-4726-adde-df3333665efa" containerName="pull" Mar 12 16:48:00 crc kubenswrapper[4687]: E0312 16:48:00.144178 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff33c98-5d1d-4726-adde-df3333665efa" containerName="extract" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 
16:48:00.144184 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff33c98-5d1d-4726-adde-df3333665efa" containerName="extract" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.144481 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff33c98-5d1d-4726-adde-df3333665efa" containerName="extract" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.144522 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d956c5-a568-4aff-ab9c-0f64eda22177" containerName="extract" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.145544 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.147827 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.148001 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.148123 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.157323 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555568-qcbw9"] Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.253611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwqz\" (UniqueName: \"kubernetes.io/projected/5ddcfa53-3e74-447c-b250-0211d7bb9e2d-kube-api-access-rjwqz\") pod \"auto-csr-approver-29555568-qcbw9\" (UID: \"5ddcfa53-3e74-447c-b250-0211d7bb9e2d\") " pod="openshift-infra/auto-csr-approver-29555568-qcbw9" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.355724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwqz\" (UniqueName: \"kubernetes.io/projected/5ddcfa53-3e74-447c-b250-0211d7bb9e2d-kube-api-access-rjwqz\") pod \"auto-csr-approver-29555568-qcbw9\" (UID: \"5ddcfa53-3e74-447c-b250-0211d7bb9e2d\") " pod="openshift-infra/auto-csr-approver-29555568-qcbw9" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.377329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwqz\" (UniqueName: \"kubernetes.io/projected/5ddcfa53-3e74-447c-b250-0211d7bb9e2d-kube-api-access-rjwqz\") pod \"auto-csr-approver-29555568-qcbw9\" (UID: \"5ddcfa53-3e74-447c-b250-0211d7bb9e2d\") " pod="openshift-infra/auto-csr-approver-29555568-qcbw9" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.423631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" event={"ID":"cff33c98-5d1d-4726-adde-df3333665efa","Type":"ContainerDied","Data":"760618169be4d6113f92cde26e1f8fca3a4363fe49923e7d7705511b818e7c1b"} Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.423680 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760618169be4d6113f92cde26e1f8fca3a4363fe49923e7d7705511b818e7c1b" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.423717 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.466503 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" Mar 12 16:48:00 crc kubenswrapper[4687]: I0312 16:48:00.973235 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555568-qcbw9"] Mar 12 16:48:01 crc kubenswrapper[4687]: I0312 16:48:01.434547 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" event={"ID":"5ddcfa53-3e74-447c-b250-0211d7bb9e2d","Type":"ContainerStarted","Data":"6e03109294ef755ac94a56b23a99b5130c6f91b09833d62d209d85cd4c39d1d4"} Mar 12 16:48:02 crc kubenswrapper[4687]: I0312 16:48:02.444552 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" event={"ID":"5ddcfa53-3e74-447c-b250-0211d7bb9e2d","Type":"ContainerStarted","Data":"2188dd9ff0425ba50c50600c3a1cbbf3efe154de0d3741e4f9c1f04067bff7b5"} Mar 12 16:48:02 crc kubenswrapper[4687]: I0312 16:48:02.467730 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" podStartSLOduration=1.51220358 podStartE2EDuration="2.467700752s" podCreationTimestamp="2026-03-12 16:48:00 +0000 UTC" firstStartedPulling="2026-03-12 16:48:00.975924643 +0000 UTC m=+2729.939887007" lastFinishedPulling="2026-03-12 16:48:01.931421835 +0000 UTC m=+2730.895384179" observedRunningTime="2026-03-12 16:48:02.459514388 +0000 UTC m=+2731.423476762" watchObservedRunningTime="2026-03-12 16:48:02.467700752 +0000 UTC m=+2731.431663126" Mar 12 16:48:03 crc kubenswrapper[4687]: I0312 16:48:03.466471 4687 generic.go:334] "Generic (PLEG): container finished" podID="5ddcfa53-3e74-447c-b250-0211d7bb9e2d" containerID="2188dd9ff0425ba50c50600c3a1cbbf3efe154de0d3741e4f9c1f04067bff7b5" exitCode=0 Mar 12 16:48:03 crc kubenswrapper[4687]: I0312 16:48:03.467564 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" event={"ID":"5ddcfa53-3e74-447c-b250-0211d7bb9e2d","Type":"ContainerDied","Data":"2188dd9ff0425ba50c50600c3a1cbbf3efe154de0d3741e4f9c1f04067bff7b5"} Mar 12 16:48:04 crc kubenswrapper[4687]: I0312 16:48:04.925789 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" Mar 12 16:48:05 crc kubenswrapper[4687]: I0312 16:48:05.076872 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjwqz\" (UniqueName: \"kubernetes.io/projected/5ddcfa53-3e74-447c-b250-0211d7bb9e2d-kube-api-access-rjwqz\") pod \"5ddcfa53-3e74-447c-b250-0211d7bb9e2d\" (UID: \"5ddcfa53-3e74-447c-b250-0211d7bb9e2d\") " Mar 12 16:48:05 crc kubenswrapper[4687]: I0312 16:48:05.083426 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddcfa53-3e74-447c-b250-0211d7bb9e2d-kube-api-access-rjwqz" (OuterVolumeSpecName: "kube-api-access-rjwqz") pod "5ddcfa53-3e74-447c-b250-0211d7bb9e2d" (UID: "5ddcfa53-3e74-447c-b250-0211d7bb9e2d"). InnerVolumeSpecName "kube-api-access-rjwqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:48:05 crc kubenswrapper[4687]: I0312 16:48:05.179404 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjwqz\" (UniqueName: \"kubernetes.io/projected/5ddcfa53-3e74-447c-b250-0211d7bb9e2d-kube-api-access-rjwqz\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:05 crc kubenswrapper[4687]: I0312 16:48:05.489430 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" event={"ID":"5ddcfa53-3e74-447c-b250-0211d7bb9e2d","Type":"ContainerDied","Data":"6e03109294ef755ac94a56b23a99b5130c6f91b09833d62d209d85cd4c39d1d4"} Mar 12 16:48:05 crc kubenswrapper[4687]: I0312 16:48:05.489666 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e03109294ef755ac94a56b23a99b5130c6f91b09833d62d209d85cd4c39d1d4" Mar 12 16:48:05 crc kubenswrapper[4687]: I0312 16:48:05.489509 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555568-qcbw9" Mar 12 16:48:06 crc kubenswrapper[4687]: I0312 16:48:06.001435 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555562-7d6lm"] Mar 12 16:48:06 crc kubenswrapper[4687]: I0312 16:48:06.014619 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555562-7d6lm"] Mar 12 16:48:07 crc kubenswrapper[4687]: I0312 16:48:07.771809 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca450e45-1f11-4741-8038-97071a2a313d" path="/var/lib/kubelet/pods/ca450e45-1f11-4741-8038-97071a2a313d/volumes" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.045589 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl"] Mar 12 16:48:10 crc kubenswrapper[4687]: E0312 16:48:10.048654 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddcfa53-3e74-447c-b250-0211d7bb9e2d" containerName="oc" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.048687 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddcfa53-3e74-447c-b250-0211d7bb9e2d" containerName="oc" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.049080 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddcfa53-3e74-447c-b250-0211d7bb9e2d" containerName="oc" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.050011 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.065757 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl"] Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.209638 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5zfc\" (UniqueName: \"kubernetes.io/projected/c59704b5-c215-4306-9e1e-05ee0bfc055e-kube-api-access-t5zfc\") pod \"cluster-logging-operator-66689c4bbf-rb2sl\" (UID: \"c59704b5-c215-4306-9e1e-05ee0bfc055e\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.312792 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5zfc\" (UniqueName: \"kubernetes.io/projected/c59704b5-c215-4306-9e1e-05ee0bfc055e-kube-api-access-t5zfc\") pod \"cluster-logging-operator-66689c4bbf-rb2sl\" (UID: \"c59704b5-c215-4306-9e1e-05ee0bfc055e\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.336613 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5zfc\" (UniqueName: \"kubernetes.io/projected/c59704b5-c215-4306-9e1e-05ee0bfc055e-kube-api-access-t5zfc\") pod \"cluster-logging-operator-66689c4bbf-rb2sl\" (UID: \"c59704b5-c215-4306-9e1e-05ee0bfc055e\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.374343 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" Mar 12 16:48:10 crc kubenswrapper[4687]: I0312 16:48:10.866245 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl"] Mar 12 16:48:10 crc kubenswrapper[4687]: W0312 16:48:10.874581 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc59704b5_c215_4306_9e1e_05ee0bfc055e.slice/crio-3b9f904fa2ebb63fb0b74cc5308f7f325bdffca9757cf711582e7544568fb0f5 WatchSource:0}: Error finding container 3b9f904fa2ebb63fb0b74cc5308f7f325bdffca9757cf711582e7544568fb0f5: Status 404 returned error can't find the container with id 3b9f904fa2ebb63fb0b74cc5308f7f325bdffca9757cf711582e7544568fb0f5 Mar 12 16:48:11 crc kubenswrapper[4687]: I0312 16:48:11.577534 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" event={"ID":"c59704b5-c215-4306-9e1e-05ee0bfc055e","Type":"ContainerStarted","Data":"3b9f904fa2ebb63fb0b74cc5308f7f325bdffca9757cf711582e7544568fb0f5"} Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.122115 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.122621 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.745399 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks"] Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.748093 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.788134 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ad1e9b15-9444-4099-8db5-3cab67383bf4-manager-config\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.788193 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.788235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92k8\" (UniqueName: \"kubernetes.io/projected/ad1e9b15-9444-4099-8db5-3cab67383bf4-kube-api-access-v92k8\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.788264 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-webhook-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.788305 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-apiservice-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.789464 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks"] Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.890843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-apiservice-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc 
kubenswrapper[4687]: I0312 16:48:14.891041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ad1e9b15-9444-4099-8db5-3cab67383bf4-manager-config\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.891071 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.891106 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92k8\" (UniqueName: \"kubernetes.io/projected/ad1e9b15-9444-4099-8db5-3cab67383bf4-kube-api-access-v92k8\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.891133 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-webhook-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.892463 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ad1e9b15-9444-4099-8db5-3cab67383bf4-manager-config\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.900455 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-apiservice-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.903401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-webhook-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.913172 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:14 crc kubenswrapper[4687]: I0312 16:48:14.913663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92k8\" (UniqueName: \"kubernetes.io/projected/ad1e9b15-9444-4099-8db5-3cab67383bf4-kube-api-access-v92k8\") pod \"loki-operator-controller-manager-649d5ff64d-9zcks\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:15 crc kubenswrapper[4687]: I0312 16:48:15.083347 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:17 crc kubenswrapper[4687]: I0312 16:48:17.562922 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks"] Mar 12 16:48:17 crc kubenswrapper[4687]: I0312 16:48:17.695545 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" event={"ID":"ad1e9b15-9444-4099-8db5-3cab67383bf4","Type":"ContainerStarted","Data":"b0ac81c4d3587e9cba8816fa1d3642eb0a17165cacfc64ed90c44ebae17b52d5"} Mar 12 16:48:17 crc kubenswrapper[4687]: I0312 16:48:17.696833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" event={"ID":"c59704b5-c215-4306-9e1e-05ee0bfc055e","Type":"ContainerStarted","Data":"453d795ba19d91fb3592e7594b10cab557975896e85acf4720ac01a2d487dace"} Mar 12 16:48:17 crc kubenswrapper[4687]: I0312 16:48:17.729074 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-66689c4bbf-rb2sl" podStartSLOduration=1.383123429 podStartE2EDuration="7.729054015s" podCreationTimestamp="2026-03-12 16:48:10 +0000 UTC" firstStartedPulling="2026-03-12 16:48:10.885465259 +0000 UTC m=+2739.849427603" lastFinishedPulling="2026-03-12 16:48:17.231395845 +0000 UTC m=+2746.195358189" observedRunningTime="2026-03-12 16:48:17.717970772 +0000 UTC m=+2746.681933116" watchObservedRunningTime="2026-03-12 16:48:17.729054015 +0000 UTC m=+2746.693016359" Mar 12 16:48:17 crc kubenswrapper[4687]: I0312 16:48:17.767030 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-qpjwx"] Mar 12 16:48:17 crc kubenswrapper[4687]: I0312 16:48:17.767260 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" podUID="3cb4eb93-d3ba-469b-85fd-fe482c1b53ea" containerName="cluster-logging-operator" containerID="cri-o://79342d023bb622e2a292e72cdb115a099cd4d60a12d93d470ca86ca3efb86c2f" gracePeriod=30 Mar 12 16:48:18 crc kubenswrapper[4687]: I0312 16:48:18.547716 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-5rr22"] Mar 12 16:48:18 crc kubenswrapper[4687]: I0312 16:48:18.548432 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/collector-5rr22" podUID="07b53efc-2b5d-409e-a999-9504a96ff173" containerName="collector" containerID="cri-o://a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6" gracePeriod=10 Mar 12 16:48:18 crc kubenswrapper[4687]: I0312 16:48:18.721990 4687 generic.go:334] "Generic (PLEG): container finished" podID="3cb4eb93-d3ba-469b-85fd-fe482c1b53ea" 
containerID="79342d023bb622e2a292e72cdb115a099cd4d60a12d93d470ca86ca3efb86c2f" exitCode=0 Mar 12 16:48:18 crc kubenswrapper[4687]: I0312 16:48:18.722095 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" event={"ID":"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea","Type":"ContainerDied","Data":"79342d023bb622e2a292e72cdb115a099cd4d60a12d93d470ca86ca3efb86c2f"} Mar 12 16:48:18 crc kubenswrapper[4687]: I0312 16:48:18.980216 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.129188 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqscw\" (UniqueName: \"kubernetes.io/projected/3cb4eb93-d3ba-469b-85fd-fe482c1b53ea-kube-api-access-bqscw\") pod \"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea\" (UID: \"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.135934 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb4eb93-d3ba-469b-85fd-fe482c1b53ea-kube-api-access-bqscw" (OuterVolumeSpecName: "kube-api-access-bqscw") pod "3cb4eb93-d3ba-469b-85fd-fe482c1b53ea" (UID: "3cb4eb93-d3ba-469b-85fd-fe482c1b53ea"). InnerVolumeSpecName "kube-api-access-bqscw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.236582 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqscw\" (UniqueName: \"kubernetes.io/projected/3cb4eb93-d3ba-469b-85fd-fe482c1b53ea-kube-api-access-bqscw\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.297478 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-5rr22" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340200 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-entrypoint\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340269 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-trusted-ca\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340294 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-sa-token\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340338 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config-openshift-service-cacrt\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340426 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b53efc-2b5d-409e-a999-9504a96ff173-tmp\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340483 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07b53efc-2b5d-409e-a999-9504a96ff173-datadir\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340549 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-metrics\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340578 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-token\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340615 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-syslog-receiver\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340656 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ghbx\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-kube-api-access-8ghbx\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: 
\"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.340708 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config\") pod \"07b53efc-2b5d-409e-a999-9504a96ff173\" (UID: \"07b53efc-2b5d-409e-a999-9504a96ff173\") " Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.345058 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.345708 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07b53efc-2b5d-409e-a999-9504a96ff173-datadir" (OuterVolumeSpecName: "datadir") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.346218 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.348802 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-sa-token" (OuterVolumeSpecName: "sa-token") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.351623 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config" (OuterVolumeSpecName: "config") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.352407 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-metrics" (OuterVolumeSpecName: "metrics") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.355190 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.356224 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-kube-api-access-8ghbx" (OuterVolumeSpecName: "kube-api-access-8ghbx") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "kube-api-access-8ghbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.363510 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-token" (OuterVolumeSpecName: "collector-token") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.376781 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b53efc-2b5d-409e-a999-9504a96ff173-tmp" (OuterVolumeSpecName: "tmp") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.381728 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-d6qd2"] Mar 12 16:48:19 crc kubenswrapper[4687]: E0312 16:48:19.382270 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b53efc-2b5d-409e-a999-9504a96ff173" containerName="collector" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.382291 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b53efc-2b5d-409e-a999-9504a96ff173" containerName="collector" Mar 12 16:48:19 crc kubenswrapper[4687]: E0312 16:48:19.382333 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb4eb93-d3ba-469b-85fd-fe482c1b53ea" containerName="cluster-logging-operator" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.382340 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb4eb93-d3ba-469b-85fd-fe482c1b53ea" containerName="cluster-logging-operator" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.382583 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb4eb93-d3ba-469b-85fd-fe482c1b53ea" containerName="cluster-logging-operator" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.382616 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b53efc-2b5d-409e-a999-9504a96ff173" containerName="collector" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.383897 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.396764 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-d6qd2"] Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.397809 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "07b53efc-2b5d-409e-a999-9504a96ff173" (UID: "07b53efc-2b5d-409e-a999-9504a96ff173"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.445704 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-trusted-ca\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447044 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-metrics\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447211 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2cc2c239-dfdc-448a-ad41-67de823ec204-datadir\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447261 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-collector-syslog-receiver\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-config\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447343 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-config-openshift-service-cacrt\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447421 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bgf\" (UniqueName: \"kubernetes.io/projected/2cc2c239-dfdc-448a-ad41-67de823ec204-kube-api-access-j5bgf\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447518 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc2c239-dfdc-448a-ad41-67de823ec204-tmp\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447549 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-entrypoint\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 
16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447570 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-collector-token\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447609 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2cc2c239-dfdc-448a-ad41-67de823ec204-sa-token\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447696 4687 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07b53efc-2b5d-409e-a999-9504a96ff173-datadir\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447707 4687 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447717 4687 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447727 4687 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07b53efc-2b5d-409e-a999-9504a96ff173-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447737 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ghbx\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-kube-api-access-8ghbx\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447745 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.447753 4687 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-entrypoint\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.452063 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.452113 4687 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07b53efc-2b5d-409e-a999-9504a96ff173-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.452126 4687 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07b53efc-2b5d-409e-a999-9504a96ff173-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.452136 4687 reconciler_common.go:293] "Volume detached for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b53efc-2b5d-409e-a999-9504a96ff173-tmp\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.554731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5bgf\" (UniqueName: \"kubernetes.io/projected/2cc2c239-dfdc-448a-ad41-67de823ec204-kube-api-access-j5bgf\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.554820 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc2c239-dfdc-448a-ad41-67de823ec204-tmp\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.554850 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-entrypoint\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.554867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-collector-token\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.554883 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2cc2c239-dfdc-448a-ad41-67de823ec204-sa-token\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.554916 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-trusted-ca\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.554933 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-metrics\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.555018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2cc2c239-dfdc-448a-ad41-67de823ec204-datadir\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.555046 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-collector-syslog-receiver\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.555082 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-config\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.555099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-config-openshift-service-cacrt\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.555994 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-config-openshift-service-cacrt\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.556865 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/2cc2c239-dfdc-448a-ad41-67de823ec204-datadir\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.557642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-trusted-ca\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.558052 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-config\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.559987 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/2cc2c239-dfdc-448a-ad41-67de823ec204-entrypoint\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.560751 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-collector-syslog-receiver\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.560915 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-metrics\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.561441 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/2cc2c239-dfdc-448a-ad41-67de823ec204-collector-token\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: 
I0312 16:48:19.565873 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc2c239-dfdc-448a-ad41-67de823ec204-tmp\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.576874 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/2cc2c239-dfdc-448a-ad41-67de823ec204-sa-token\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.582960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5bgf\" (UniqueName: \"kubernetes.io/projected/2cc2c239-dfdc-448a-ad41-67de823ec204-kube-api-access-j5bgf\") pod \"collector-d6qd2\" (UID: \"2cc2c239-dfdc-448a-ad41-67de823ec204\") " pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.709038 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-d6qd2" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.773658 4687 generic.go:334] "Generic (PLEG): container finished" podID="07b53efc-2b5d-409e-a999-9504a96ff173" containerID="a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6" exitCode=0 Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.773754 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5rr22" event={"ID":"07b53efc-2b5d-409e-a999-9504a96ff173","Type":"ContainerDied","Data":"a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6"} Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.773780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-5rr22" event={"ID":"07b53efc-2b5d-409e-a999-9504a96ff173","Type":"ContainerDied","Data":"7866a04618f8449a63fc1e7fb229fe2fac24661576f05de2ee732d3b0a80e675"} Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.773796 4687 scope.go:117] "RemoveContainer" containerID="a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.773952 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-5rr22" Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.782850 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" event={"ID":"3cb4eb93-d3ba-469b-85fd-fe482c1b53ea","Type":"ContainerDied","Data":"b01c87782964e1fd11d3a43ecdde967c10e17c114f92cb6348a476bac191c76f"} Mar 12 16:48:19 crc kubenswrapper[4687]: I0312 16:48:19.782985 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-qpjwx" Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.024435 4687 scope.go:117] "RemoveContainer" containerID="a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6" Mar 12 16:48:20 crc kubenswrapper[4687]: E0312 16:48:20.046939 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6\": container with ID starting with a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6 not found: ID does not exist" containerID="a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6" Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.046978 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6"} err="failed to get container status \"a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6\": rpc error: code = NotFound desc = could not find container \"a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6\": container with ID starting with a16e51ed7c85f8766871ac6a750aceb38db755928375735fb58a068941c843e6 not found: ID does not exist" Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.047002 4687 scope.go:117] "RemoveContainer" containerID="79342d023bb622e2a292e72cdb115a099cd4d60a12d93d470ca86ca3efb86c2f" Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.087415 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-qpjwx"] Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.184720 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-qpjwx"] Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.247636 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-5rr22"] Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.257748 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-5rr22"] Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.508790 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-d6qd2"] Mar 12 16:48:20 crc kubenswrapper[4687]: W0312 16:48:20.528476 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cc2c239_dfdc_448a_ad41_67de823ec204.slice/crio-6f37af5396514bc62709a7d6dbd438f4e2afb05ed81d648dc9673f95f1048154 WatchSource:0}: Error finding container 6f37af5396514bc62709a7d6dbd438f4e2afb05ed81d648dc9673f95f1048154: Status 404 returned error can't find the container with id 6f37af5396514bc62709a7d6dbd438f4e2afb05ed81d648dc9673f95f1048154 Mar 12 16:48:20 crc kubenswrapper[4687]: I0312 16:48:20.795280 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-d6qd2" event={"ID":"2cc2c239-dfdc-448a-ad41-67de823ec204","Type":"ContainerStarted","Data":"6f37af5396514bc62709a7d6dbd438f4e2afb05ed81d648dc9673f95f1048154"} Mar 12 16:48:21 crc kubenswrapper[4687]: I0312 16:48:21.749614 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b53efc-2b5d-409e-a999-9504a96ff173" path="/var/lib/kubelet/pods/07b53efc-2b5d-409e-a999-9504a96ff173/volumes" Mar 12 16:48:21 crc kubenswrapper[4687]: I0312 16:48:21.752695 4687 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="3cb4eb93-d3ba-469b-85fd-fe482c1b53ea" path="/var/lib/kubelet/pods/3cb4eb93-d3ba-469b-85fd-fe482c1b53ea/volumes" Mar 12 16:48:21 crc kubenswrapper[4687]: I0312 16:48:21.826032 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" event={"ID":"ad1e9b15-9444-4099-8db5-3cab67383bf4","Type":"ContainerStarted","Data":"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8"} Mar 12 16:48:22 crc kubenswrapper[4687]: I0312 16:48:22.840939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" event={"ID":"ad1e9b15-9444-4099-8db5-3cab67383bf4","Type":"ContainerStarted","Data":"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101"} Mar 12 16:48:22 crc kubenswrapper[4687]: I0312 16:48:22.841444 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:22 crc kubenswrapper[4687]: I0312 16:48:22.869400 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" podStartSLOduration=5.026157039 podStartE2EDuration="8.869373177s" podCreationTimestamp="2026-03-12 16:48:14 +0000 UTC" firstStartedPulling="2026-03-12 16:48:17.635822055 +0000 UTC m=+2746.599784399" lastFinishedPulling="2026-03-12 16:48:21.479038193 +0000 UTC m=+2750.443000537" observedRunningTime="2026-03-12 16:48:22.862405167 +0000 UTC m=+2751.826367531" watchObservedRunningTime="2026-03-12 16:48:22.869373177 +0000 UTC m=+2751.833335521" Mar 12 16:48:28 crc kubenswrapper[4687]: I0312 16:48:28.939565 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-d6qd2" event={"ID":"2cc2c239-dfdc-448a-ad41-67de823ec204","Type":"ContainerStarted","Data":"415864facd779911deb3f4a96d495cc3bd1179d8aee40bbeeaf094a90012a08a"} Mar 12 16:48:28 crc kubenswrapper[4687]: I0312 16:48:28.963559 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-d6qd2" podStartSLOduration=2.877398849 podStartE2EDuration="9.963536557s" podCreationTimestamp="2026-03-12 16:48:19 +0000 UTC" firstStartedPulling="2026-03-12 16:48:20.534732208 +0000 UTC m=+2749.498694562" lastFinishedPulling="2026-03-12 16:48:27.620869926 +0000 UTC m=+2756.584832270" observedRunningTime="2026-03-12 16:48:28.957964545 +0000 UTC m=+2757.921926889" watchObservedRunningTime="2026-03-12 16:48:28.963536557 +0000 UTC m=+2757.927498901" Mar 12 16:48:33 crc kubenswrapper[4687]: I0312 16:48:33.863019 4687 scope.go:117] "RemoveContainer" containerID="df5c4edfd19adb4ba5e38fa774d6eb5ba06f70c2fc6381ccf8c0f9823f0c7f10" Mar 12 16:48:33 crc kubenswrapper[4687]: I0312 16:48:33.884934 4687 scope.go:117] "RemoveContainer" containerID="b9470fb53e6297cdfadda033e5562c842fdb9bcf8f75fcfd56b13b610889e485" Mar 12 16:48:33 crc kubenswrapper[4687]: I0312 16:48:33.919668 4687 scope.go:117] "RemoveContainer" containerID="2e4d56c45d377e2deae276aea6b88ede42474dfd391e05fe9ea3c543b8a21a52" Mar 12 16:48:33 crc kubenswrapper[4687]: I0312 16:48:33.968405 4687 scope.go:117] "RemoveContainer" containerID="40e11b2146e12c519d8491b61bb4f86e4e112fbde6842df71864f1dfa124a862" Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.086339 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.178973 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f"] Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.179725 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="manager" containerID="cri-o://ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217" gracePeriod=10 Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.179783 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="kube-rbac-proxy" containerID="cri-o://8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6" gracePeriod=10 Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.869385 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.929398 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-apiservice-cert\") pod \"1206243a-8728-44ca-9858-672a27817783\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.929537 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdddq\" (UniqueName: \"kubernetes.io/projected/1206243a-8728-44ca-9858-672a27817783-kube-api-access-bdddq\") pod \"1206243a-8728-44ca-9858-672a27817783\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.929612 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-loki-operator-metrics-cert\") pod \"1206243a-8728-44ca-9858-672a27817783\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.929653 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-webhook-cert\") pod \"1206243a-8728-44ca-9858-672a27817783\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.929805 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1206243a-8728-44ca-9858-672a27817783-manager-config\") pod \"1206243a-8728-44ca-9858-672a27817783\" (UID: \"1206243a-8728-44ca-9858-672a27817783\") " Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.959132 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "1206243a-8728-44ca-9858-672a27817783" (UID: "1206243a-8728-44ca-9858-672a27817783"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.959967 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1206243a-8728-44ca-9858-672a27817783" (UID: "1206243a-8728-44ca-9858-672a27817783"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.960293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1206243a-8728-44ca-9858-672a27817783-kube-api-access-bdddq" (OuterVolumeSpecName: "kube-api-access-bdddq") pod "1206243a-8728-44ca-9858-672a27817783" (UID: "1206243a-8728-44ca-9858-672a27817783"). InnerVolumeSpecName "kube-api-access-bdddq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:48:35 crc kubenswrapper[4687]: I0312 16:48:35.971049 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-loki-operator-metrics-cert" (OuterVolumeSpecName: "loki-operator-metrics-cert") pod "1206243a-8728-44ca-9858-672a27817783" (UID: "1206243a-8728-44ca-9858-672a27817783"). InnerVolumeSpecName "loki-operator-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.032329 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.032901 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdddq\" (UniqueName: \"kubernetes.io/projected/1206243a-8728-44ca-9858-672a27817783-kube-api-access-bdddq\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.032920 4687 reconciler_common.go:293] "Volume detached for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-loki-operator-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.032931 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1206243a-8728-44ca-9858-672a27817783-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.036649 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1206243a-8728-44ca-9858-672a27817783-manager-config" (OuterVolumeSpecName: "manager-config") pod "1206243a-8728-44ca-9858-672a27817783" (UID: "1206243a-8728-44ca-9858-672a27817783"). InnerVolumeSpecName "manager-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.046709 4687 generic.go:334] "Generic (PLEG): container finished" podID="1206243a-8728-44ca-9858-672a27817783" containerID="8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6" exitCode=0 Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.046745 4687 generic.go:334] "Generic (PLEG): container finished" podID="1206243a-8728-44ca-9858-672a27817783" containerID="ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217" exitCode=0 Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.046768 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" event={"ID":"1206243a-8728-44ca-9858-672a27817783","Type":"ContainerDied","Data":"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6"} Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.046794 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" event={"ID":"1206243a-8728-44ca-9858-672a27817783","Type":"ContainerDied","Data":"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217"} Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.046804 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" event={"ID":"1206243a-8728-44ca-9858-672a27817783","Type":"ContainerDied","Data":"7c8b8228c7a04c01cdf4016050131a8d663faed9255fa4b3ca8f3b80a4bd9388"} Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.046824 4687 scope.go:117] "RemoveContainer" containerID="8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.047008 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.138912 4687 reconciler_common.go:293] "Volume detached for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/1206243a-8728-44ca-9858-672a27817783-manager-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.144723 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f"] Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.148465 4687 scope.go:117] "RemoveContainer" containerID="ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.154534 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7f5dc69449-f5b8f"] Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.180989 4687 scope.go:117] "RemoveContainer" containerID="8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6" Mar 12 16:48:36 crc kubenswrapper[4687]: E0312 16:48:36.183493 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6\": container with ID starting with 8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6 not found: ID does not exist" containerID="8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.183523 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6"} err="failed to get container status \"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6\": rpc error: code = NotFound desc = could not find container \"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6\": container with ID starting with 8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6 not found: ID does not exist" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.183543 4687 scope.go:117] "RemoveContainer" containerID="ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217" Mar 12 16:48:36 crc kubenswrapper[4687]: E0312 16:48:36.184159 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217\": container with ID starting with ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217 not found: ID does not exist" containerID="ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.184203 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217"} err="failed to get container status \"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217\": rpc error: code = NotFound desc = could not find container \"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217\": container with ID starting with ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217 not found: ID does not exist" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.184228 4687 scope.go:117] "RemoveContainer" 
containerID="8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.184435 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6"} err="failed to get container status \"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6\": rpc error: code = NotFound desc = could not find container \"8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6\": container with ID starting with 8559ed6be0efdac7baba5b895836d66d6d8a13fdea0c9b9ef20b5e4410148db6 not found: ID does not exist" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.184459 4687 scope.go:117] "RemoveContainer" containerID="ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217" Mar 12 16:48:36 crc kubenswrapper[4687]: I0312 16:48:36.184621 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217"} err="failed to get container status \"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217\": rpc error: code = NotFound desc = could not find container \"ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217\": container with ID starting with ce5495a261a8a939b95a056970f43c4b60580cedffe47e7523101ecf631e1217 not found: ID does not exist" Mar 12 16:48:37 crc kubenswrapper[4687]: I0312 16:48:37.748616 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1206243a-8728-44ca-9858-672a27817783" path="/var/lib/kubelet/pods/1206243a-8728-44ca-9858-672a27817783/volumes" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.186796 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv"] Mar 12 16:48:42 crc kubenswrapper[4687]: E0312 16:48:42.187825 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="manager" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.187839 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="manager" Mar 12 16:48:42 crc kubenswrapper[4687]: E0312 16:48:42.187857 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="kube-rbac-proxy" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.187865 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="kube-rbac-proxy" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.188100 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="manager" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.188119 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1206243a-8728-44ca-9858-672a27817783" containerName="kube-rbac-proxy" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.189327 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.197901 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv"] Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.291661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-apiservice-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.291779 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4568a6b6-c008-4ea7-abec-b824324732d3-manager-config\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.292068 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc855\" (UniqueName: \"kubernetes.io/projected/4568a6b6-c008-4ea7-abec-b824324732d3-kube-api-access-pc855\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.292211 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-webhook-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.292300 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.394290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc855\" (UniqueName: \"kubernetes.io/projected/4568a6b6-c008-4ea7-abec-b824324732d3-kube-api-access-pc855\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.394409 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-webhook-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.394466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.394618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-apiservice-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.394678 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4568a6b6-c008-4ea7-abec-b824324732d3-manager-config\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.395838 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4568a6b6-c008-4ea7-abec-b824324732d3-manager-config\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.405712 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-webhook-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.415065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.415429 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc855\" (UniqueName: \"kubernetes.io/projected/4568a6b6-c008-4ea7-abec-b824324732d3-kube-api-access-pc855\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: \"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.416123 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4568a6b6-c008-4ea7-abec-b824324732d3-apiservice-cert\") pod \"loki-operator-controller-manager-56bfd9f789-6lvcv\" (UID: 
\"4568a6b6-c008-4ea7-abec-b824324732d3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.512468 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:42 crc kubenswrapper[4687]: I0312 16:48:42.984079 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv"] Mar 12 16:48:42 crc kubenswrapper[4687]: W0312 16:48:42.986177 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4568a6b6_c008_4ea7_abec_b824324732d3.slice/crio-78ead2ac51bee03ef22249baee4885c6f5f6d3aa95810ff467ce7c4f5c3a9a6e WatchSource:0}: Error finding container 78ead2ac51bee03ef22249baee4885c6f5f6d3aa95810ff467ce7c4f5c3a9a6e: Status 404 returned error can't find the container with id 78ead2ac51bee03ef22249baee4885c6f5f6d3aa95810ff467ce7c4f5c3a9a6e Mar 12 16:48:43 crc kubenswrapper[4687]: I0312 16:48:43.120242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" event={"ID":"4568a6b6-c008-4ea7-abec-b824324732d3","Type":"ContainerStarted","Data":"78ead2ac51bee03ef22249baee4885c6f5f6d3aa95810ff467ce7c4f5c3a9a6e"} Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.121185 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.121711 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.121756 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.122610 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46abef41881c9a6f28882d6a10c8f0678dc2602721fd1e31fad6112aecd70e2a"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.122658 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://46abef41881c9a6f28882d6a10c8f0678dc2602721fd1e31fad6112aecd70e2a" gracePeriod=600 Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.133833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" 
event={"ID":"4568a6b6-c008-4ea7-abec-b824324732d3","Type":"ContainerStarted","Data":"967ee0999744337d3c776bbf5b844d5563a6fcd828db8e4d322edaa754912264"} Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.133876 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" event={"ID":"4568a6b6-c008-4ea7-abec-b824324732d3","Type":"ContainerStarted","Data":"29aece1dd0a74de50c22be1e7df767e9f8f6d6ffc185144dd9aa6a81a345b34b"} Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.134009 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:44 crc kubenswrapper[4687]: I0312 16:48:44.167206 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" podStartSLOduration=2.167186652 podStartE2EDuration="2.167186652s" podCreationTimestamp="2026-03-12 16:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:48:44.161220329 +0000 UTC m=+2773.125182673" watchObservedRunningTime="2026-03-12 16:48:44.167186652 +0000 UTC m=+2773.131149006" Mar 12 16:48:45 crc kubenswrapper[4687]: I0312 16:48:45.148010 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="46abef41881c9a6f28882d6a10c8f0678dc2602721fd1e31fad6112aecd70e2a" exitCode=0 Mar 12 16:48:45 crc kubenswrapper[4687]: I0312 16:48:45.148087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"46abef41881c9a6f28882d6a10c8f0678dc2602721fd1e31fad6112aecd70e2a"} Mar 12 16:48:45 crc kubenswrapper[4687]: I0312 16:48:45.148651 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65"} Mar 12 16:48:45 crc kubenswrapper[4687]: I0312 16:48:45.148689 4687 scope.go:117] "RemoveContainer" containerID="b462fbb6c33479f65e0d4e298217e0a59c8b66d59ae0e322b64e6ad03daa271a" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.722334 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-8k59p"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.725667 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.785660 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-8k59p"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.804855 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.805062 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" containerID="cri-o://b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1" gracePeriod=30 Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.825223 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.827330 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.840287 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.840518 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-compactor-0" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerName="loki-compactor" containerID="cri-o://21c54ee8fa5a1723869ed6bd1240c2446db2a05934d7e0f6a51b4fb501e35f88" gracePeriod=30 Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.856483 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.874817 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.874942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875107 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzkd\" (UniqueName: 
\"kubernetes.io/projected/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-kube-api-access-xhzkd\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875126 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875166 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-config\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-config\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875295 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlmhw\" (UniqueName: \"kubernetes.io/projected/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-kube-api-access-nlmhw\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.875495 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.885375 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s"] Mar 12 16:48:51 crc 
kubenswrapper[4687]: I0312 16:48:51.887225 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.918720 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.973889 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.974343 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-index-gateway-0" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerName="loki-index-gateway" containerID="cri-o://31a92a22b1d3049f36d2932f00ae91d186801ebf59c384e2d4af16f88ec29366" gracePeriod=30 Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.978624 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.978721 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.978755 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac17b136-46f1-4129-a22f-bcd3baaf7813-config\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.978955 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzkd\" (UniqueName: \"kubernetes.io/projected/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-kube-api-access-xhzkd\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979036 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " 
pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979079 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvf6\" (UniqueName: \"kubernetes.io/projected/ac17b136-46f1-4129-a22f-bcd3baaf7813-kube-api-access-5fvf6\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979108 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-config\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979161 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-config\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979268 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979332 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlmhw\" (UniqueName: \"kubernetes.io/projected/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-kube-api-access-nlmhw\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.979495 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " 
pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.980467 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.980775 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.980931 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.981233 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-config\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.981711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-config\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.981993 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.986816 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.987014 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.987299 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.990924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:51 crc kubenswrapper[4687]: I0312 16:48:51.994339 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.006333 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhzkd\" (UniqueName: \"kubernetes.io/projected/33969389-2dd2-4c4b-ae70-d6e71f0fdf14-kube-api-access-xhzkd\") pod \"logging-loki-distributor-9c6b6d984-8k59p\" (UID: \"33969389-2dd2-4c4b-ae70-d6e71f0fdf14\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.007699 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlmhw\" (UniqueName: \"kubernetes.io/projected/bc2003f0-4f8e-4e59-8a1a-dd7be452b232-kube-api-access-nlmhw\") pod \"logging-loki-querier-6dcbdf8bb8-6zsz9\" (UID: \"bc2003f0-4f8e-4e59-8a1a-dd7be452b232\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.012984 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk"] Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.016593 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.024153 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk"] Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.061621 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.086766 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.086848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac17b136-46f1-4129-a22f-bcd3baaf7813-config\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.086984 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvf6\" (UniqueName: \"kubernetes.io/projected/ac17b136-46f1-4129-a22f-bcd3baaf7813-kube-api-access-5fvf6\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.087159 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.087293 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.090210 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.090767 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac17b136-46f1-4129-a22f-bcd3baaf7813-config\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.111000 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " 
pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.124181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ac17b136-46f1-4129-a22f-bcd3baaf7813-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.142232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvf6\" (UniqueName: \"kubernetes.io/projected/ac17b136-46f1-4129-a22f-bcd3baaf7813-kube-api-access-5fvf6\") pod \"logging-loki-query-frontend-ff66c4dc9-bxc6s\" (UID: \"ac17b136-46f1-4129-a22f-bcd3baaf7813\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.151814 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.211230 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-tls-secret\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.228432 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7xc\" (UniqueName: \"kubernetes.io/projected/75324694-6fad-4d26-8415-9d7f55ab5c1d-kube-api-access-hq7xc\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.256348 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.259861 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.259911 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.260029 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-lokistack-gateway\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.260083 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-tenants\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.260152 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.260224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-rbac\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.362831 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.363157 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-rbac\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " 
pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.363247 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-tls-secret\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.363292 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7xc\" (UniqueName: \"kubernetes.io/projected/75324694-6fad-4d26-8415-9d7f55ab5c1d-kube-api-access-hq7xc\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.363368 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.363391 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.363437 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-lokistack-gateway\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.363462 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-tenants\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.366235 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.366252 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.366445 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-lokistack-gateway\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.366600 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/75324694-6fad-4d26-8415-9d7f55ab5c1d-rbac\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.369489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-tenants\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.369512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.369910 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/75324694-6fad-4d26-8415-9d7f55ab5c1d-tls-secret\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.381887 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7xc\" (UniqueName: \"kubernetes.io/projected/75324694-6fad-4d26-8415-9d7f55ab5c1d-kube-api-access-hq7xc\") pod \"logging-loki-gateway-54f8b9b48b-cfshk\" (UID: \"75324694-6fad-4d26-8415-9d7f55ab5c1d\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.515327 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.590172 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks"] Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.590398 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="manager" containerID="cri-o://ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8" gracePeriod=10 Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.590583 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="kube-rbac-proxy" 
containerID="cri-o://c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101" gracePeriod=10 Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.676885 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.876206 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-8k59p"] Mar 12 16:48:52 crc kubenswrapper[4687]: I0312 16:48:52.893858 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9"] Mar 12 16:48:53 crc kubenswrapper[4687]: W0312 16:48:53.058187 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac17b136_46f1_4129_a22f_bcd3baaf7813.slice/crio-f0955d91a002330224040bd5ba45619e70a4d6f856aa8f6d286b723e5cd246c9 WatchSource:0}: Error finding container f0955d91a002330224040bd5ba45619e70a4d6f856aa8f6d286b723e5cd246c9: Status 404 returned error can't find the container with id f0955d91a002330224040bd5ba45619e70a4d6f856aa8f6d286b723e5cd246c9 Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.060294 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s"] Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.294266 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.314693 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk"] Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.387466 4687 generic.go:334] "Generic (PLEG): container finished" podID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerID="c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101" exitCode=0 Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.387503 4687 generic.go:334] "Generic (PLEG): container finished" podID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerID="ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8" exitCode=0 Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.387553 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" event={"ID":"ad1e9b15-9444-4099-8db5-3cab67383bf4","Type":"ContainerDied","Data":"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101"} Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.387587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" event={"ID":"ad1e9b15-9444-4099-8db5-3cab67383bf4","Type":"ContainerDied","Data":"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8"} Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.387602 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" event={"ID":"ad1e9b15-9444-4099-8db5-3cab67383bf4","Type":"ContainerDied","Data":"b0ac81c4d3587e9cba8816fa1d3642eb0a17165cacfc64ed90c44ebae17b52d5"} Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.387621 4687 scope.go:117] "RemoveContainer" containerID="c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101" Mar 12 
16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.387780 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.393340 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" event={"ID":"75324694-6fad-4d26-8415-9d7f55ab5c1d","Type":"ContainerStarted","Data":"07f9407f4606b22dd1f5c0e86ce10bebc55fdca7f4b24615053bb40e9a16c6b6"} Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.394851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" event={"ID":"33969389-2dd2-4c4b-ae70-d6e71f0fdf14","Type":"ContainerStarted","Data":"e80e37063d6a0c9c0cd455e665e26a2526e48c11e7adda11121b907158f4ac1d"} Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.395343 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-webhook-cert\") pod \"ad1e9b15-9444-4099-8db5-3cab67383bf4\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.395418 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-apiservice-cert\") pod \"ad1e9b15-9444-4099-8db5-3cab67383bf4\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.395755 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v92k8\" (UniqueName: \"kubernetes.io/projected/ad1e9b15-9444-4099-8db5-3cab67383bf4-kube-api-access-v92k8\") pod \"ad1e9b15-9444-4099-8db5-3cab67383bf4\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.395790 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ad1e9b15-9444-4099-8db5-3cab67383bf4-manager-config\") pod \"ad1e9b15-9444-4099-8db5-3cab67383bf4\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.395891 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-loki-operator-metrics-cert\") pod \"ad1e9b15-9444-4099-8db5-3cab67383bf4\" (UID: \"ad1e9b15-9444-4099-8db5-3cab67383bf4\") " Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.396018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" event={"ID":"ac17b136-46f1-4129-a22f-bcd3baaf7813","Type":"ContainerStarted","Data":"f0955d91a002330224040bd5ba45619e70a4d6f856aa8f6d286b723e5cd246c9"} Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.398048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" event={"ID":"bc2003f0-4f8e-4e59-8a1a-dd7be452b232","Type":"ContainerStarted","Data":"cca0154c5af32b1d7f5699676f5b7347d57ad76756f6cc5aa18b8feaf75fa2f6"} Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.402046 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-loki-operator-metrics-cert" (OuterVolumeSpecName: "loki-operator-metrics-cert") pod "ad1e9b15-9444-4099-8db5-3cab67383bf4" (UID: "ad1e9b15-9444-4099-8db5-3cab67383bf4"). InnerVolumeSpecName "loki-operator-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.402088 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "ad1e9b15-9444-4099-8db5-3cab67383bf4" (UID: "ad1e9b15-9444-4099-8db5-3cab67383bf4"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.402179 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ad1e9b15-9444-4099-8db5-3cab67383bf4" (UID: "ad1e9b15-9444-4099-8db5-3cab67383bf4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.402913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1e9b15-9444-4099-8db5-3cab67383bf4-kube-api-access-v92k8" (OuterVolumeSpecName: "kube-api-access-v92k8") pod "ad1e9b15-9444-4099-8db5-3cab67383bf4" (UID: "ad1e9b15-9444-4099-8db5-3cab67383bf4"). InnerVolumeSpecName "kube-api-access-v92k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.420385 4687 scope.go:117] "RemoveContainer" containerID="ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.430393 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad1e9b15-9444-4099-8db5-3cab67383bf4-manager-config" (OuterVolumeSpecName: "manager-config") pod "ad1e9b15-9444-4099-8db5-3cab67383bf4" (UID: "ad1e9b15-9444-4099-8db5-3cab67383bf4"). InnerVolumeSpecName "manager-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.450263 4687 scope.go:117] "RemoveContainer" containerID="c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101" Mar 12 16:48:53 crc kubenswrapper[4687]: E0312 16:48:53.450954 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101\": container with ID starting with c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101 not found: ID does not exist" containerID="c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.451001 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101"} err="failed to get container status \"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101\": rpc error: code = NotFound desc = could not find container \"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101\": container with ID starting with c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101 not found: ID does not exist" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.451031 4687 scope.go:117] "RemoveContainer" containerID="ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8" Mar 12 16:48:53 crc kubenswrapper[4687]: E0312 16:48:53.451465 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8\": container with ID starting with ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8 not found: ID does not exist" containerID="ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.451500 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8"} err="failed to get container status \"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8\": rpc error: code = NotFound desc = could not find container \"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8\": container with ID starting with ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8 not found: ID does not exist" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.451534 4687 scope.go:117] "RemoveContainer" containerID="c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.451811 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101"} err="failed to get container status \"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101\": rpc error: code = NotFound desc = could not find container \"c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101\": container with ID starting with c508b1c6162fc752ed792d2c3b9b48d9489a2eb5f65b4ac62bf6e9b15cd98101 not found: ID does not exist" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.452172 4687 scope.go:117] "RemoveContainer" containerID="ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.452556 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8"} err="failed to get container status \"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8\": rpc error: code = NotFound desc = could not find container \"ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8\": container with ID starting with ff562037481998674abad18cdca77ef082dc9f1ac4aff559c0b77decf5deefa8 not found: ID does not exist" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.499766 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v92k8\" (UniqueName: \"kubernetes.io/projected/ad1e9b15-9444-4099-8db5-3cab67383bf4-kube-api-access-v92k8\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.499813 4687 reconciler_common.go:293] "Volume detached for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ad1e9b15-9444-4099-8db5-3cab67383bf4-manager-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.499825 4687 reconciler_common.go:293] "Volume detached for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-loki-operator-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.499837 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.499854 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad1e9b15-9444-4099-8db5-3cab67383bf4-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.725657 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks"] Mar 12 16:48:53 crc kubenswrapper[4687]: I0312 16:48:53.745332 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-649d5ff64d-9zcks"] Mar 12 16:48:55 crc kubenswrapper[4687]: I0312 16:48:55.754158 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" path="/var/lib/kubelet/pods/ad1e9b15-9444-4099-8db5-3cab67383bf4/volumes" Mar 12 16:48:56 crc kubenswrapper[4687]: I0312 16:48:56.680129 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:48:56 crc kubenswrapper[4687]: I0312 16:48:56.680602 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:48:56 crc kubenswrapper[4687]: I0312 16:48:56.756209 4687 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:48:56 crc kubenswrapper[4687]: I0312 16:48:56.756267 4687 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:48:56 crc kubenswrapper[4687]: I0312 16:48:56.815148 4687 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:48:56 crc kubenswrapper[4687]: I0312 16:48:56.815212 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.468232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" event={"ID":"ac17b136-46f1-4129-a22f-bcd3baaf7813","Type":"ContainerStarted","Data":"70226c0c34521b5630ca849b18473f459d87017042ab510a8fae35c7fdff7d07"} Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.468746 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.469614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" event={"ID":"bc2003f0-4f8e-4e59-8a1a-dd7be452b232","Type":"ContainerStarted","Data":"2922264e7af4ce1c2511c54d1fcfe5330bedba3fa3190bd5b7f689af5f4cc6b4"} Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.469734 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.471143 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" event={"ID":"75324694-6fad-4d26-8415-9d7f55ab5c1d","Type":"ContainerStarted","Data":"1e57cad50e9a109a5662245910dbf74484eb4daa3f033b1439bf701fecc6e154"} Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.472279 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" event={"ID":"33969389-2dd2-4c4b-ae70-d6e71f0fdf14","Type":"ContainerStarted","Data":"704161cc53dda7670e3751e87ac103f51a2e5994465f24e218f393cd476bf167"} Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.472464 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.484926 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" podStartSLOduration=3.2767093369999998 podStartE2EDuration="8.484902996s" podCreationTimestamp="2026-03-12 16:48:51 +0000 UTC" firstStartedPulling="2026-03-12 16:48:53.06102808 +0000 UTC m=+2782.024990424" lastFinishedPulling="2026-03-12 16:48:58.269221739 +0000 UTC m=+2787.233184083" observedRunningTime="2026-03-12 16:48:59.482075969 +0000 UTC m=+2788.446038313" watchObservedRunningTime="2026-03-12 16:48:59.484902996 +0000 UTC m=+2788.448865340" Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.509954 4687 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" podStartSLOduration=3.108282553 podStartE2EDuration="8.509934291s" podCreationTimestamp="2026-03-12 16:48:51 +0000 UTC" firstStartedPulling="2026-03-12 16:48:52.869327838 +0000 UTC m=+2781.833290182" lastFinishedPulling="2026-03-12 16:48:58.270979576 +0000 UTC m=+2787.234941920" observedRunningTime="2026-03-12 16:48:59.504125082 +0000 UTC m=+2788.468087426" watchObservedRunningTime="2026-03-12 16:48:59.509934291 +0000 UTC m=+2788.473896635" Mar 12 16:48:59 crc kubenswrapper[4687]: I0312 16:48:59.528850 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" podStartSLOduration=3.130734295 podStartE2EDuration="8.528828918s" podCreationTimestamp="2026-03-12 16:48:51 +0000 UTC" firstStartedPulling="2026-03-12 16:48:52.870128839 +0000 UTC m=+2781.834091183" lastFinishedPulling="2026-03-12 16:48:58.268223462 +0000 UTC m=+2787.232185806" observedRunningTime="2026-03-12 16:48:59.528109798 +0000 UTC m=+2788.492072152" watchObservedRunningTime="2026-03-12 16:48:59.528828918 +0000 UTC m=+2788.492791282" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.504100 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" event={"ID":"75324694-6fad-4d26-8415-9d7f55ab5c1d","Type":"ContainerStarted","Data":"da86e9f9c53609b569af424748a845dfba7943355b68474d71f35813015bec46"} Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.505483 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.505503 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.516433 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.516761 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.543083 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podStartSLOduration=2.726882651 podStartE2EDuration="10.543055875s" podCreationTimestamp="2026-03-12 16:48:51 +0000 UTC" firstStartedPulling="2026-03-12 16:48:53.306763361 +0000 UTC m=+2782.270725705" lastFinishedPulling="2026-03-12 16:49:01.122936585 +0000 UTC m=+2790.086898929" observedRunningTime="2026-03-12 16:49:01.528040555 +0000 UTC m=+2790.492002909" watchObservedRunningTime="2026-03-12 16:49:01.543055875 +0000 UTC m=+2790.507018259" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.606060 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-t74nx"] Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.612570 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="gateway" containerID="cri-o://46922afdc067616b14f9e4d1a062ba92d26923e76bf3fb7dc392a5e3f33e8491" gracePeriod=30 Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.612731 
4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="opa" containerID="cri-o://ecb4d9ff91e7fc2a4750decb4519b998ca16fc9f1e60617799f11234a403ccf5" gracePeriod=30 Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.647570 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf"] Mar 12 16:49:01 crc kubenswrapper[4687]: E0312 16:49:01.648248 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="kube-rbac-proxy" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.648272 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="kube-rbac-proxy" Mar 12 16:49:01 crc kubenswrapper[4687]: E0312 16:49:01.648308 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="manager" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.648319 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="manager" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.648658 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="manager" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.648702 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1e9b15-9444-4099-8db5-3cab67383bf4" containerName="kube-rbac-proxy" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.650539 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.670550 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf"] Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.843935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.844224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-rbac\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.844266 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-lokistack-gateway\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.844339 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48csw\" (UniqueName: 
\"kubernetes.io/projected/7b24457c-bd41-4df3-95a1-10b69540a4af-kube-api-access-48csw\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.844453 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.844510 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-tls-secret\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.844535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-tenants\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.844672 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.946711 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.947520 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.947622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-rbac\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.947657 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-lokistack-gateway\") pod 
\"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.947703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48csw\" (UniqueName: \"kubernetes.io/projected/7b24457c-bd41-4df3-95a1-10b69540a4af-kube-api-access-48csw\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.948443 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-lokistack-gateway\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.948528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-rbac\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.948562 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.948755 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-tls-secret\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.948855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-tenants\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.949016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.950108 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc 
kubenswrapper[4687]: I0312 16:49:01.956993 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-tenants\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.959014 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-tls-secret\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.982002 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7b24457c-bd41-4df3-95a1-10b69540a4af-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:01 crc kubenswrapper[4687]: I0312 16:49:01.988195 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48csw\" (UniqueName: \"kubernetes.io/projected/7b24457c-bd41-4df3-95a1-10b69540a4af-kube-api-access-48csw\") pod \"logging-loki-gateway-54f8b9b48b-87lbf\" (UID: \"7b24457c-bd41-4df3-95a1-10b69540a4af\") " pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:02 crc kubenswrapper[4687]: I0312 16:49:02.278169 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:02 crc kubenswrapper[4687]: I0312 16:49:02.515967 4687 generic.go:334] "Generic (PLEG): container finished" podID="0b4762e3-e056-423a-b130-487353e771ed" containerID="ecb4d9ff91e7fc2a4750decb4519b998ca16fc9f1e60617799f11234a403ccf5" exitCode=0 Mar 12 16:49:02 crc kubenswrapper[4687]: I0312 16:49:02.516051 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" event={"ID":"0b4762e3-e056-423a-b130-487353e771ed","Type":"ContainerDied","Data":"ecb4d9ff91e7fc2a4750decb4519b998ca16fc9f1e60617799f11234a403ccf5"} Mar 12 16:49:02 crc kubenswrapper[4687]: I0312 16:49:02.787177 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf"] Mar 12 16:49:03 crc kubenswrapper[4687]: I0312 16:49:03.532526 4687 generic.go:334] "Generic (PLEG): container finished" podID="0b4762e3-e056-423a-b130-487353e771ed" containerID="46922afdc067616b14f9e4d1a062ba92d26923e76bf3fb7dc392a5e3f33e8491" exitCode=0 Mar 12 16:49:03 crc kubenswrapper[4687]: I0312 16:49:03.532609 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" event={"ID":"0b4762e3-e056-423a-b130-487353e771ed","Type":"ContainerDied","Data":"46922afdc067616b14f9e4d1a062ba92d26923e76bf3fb7dc392a5e3f33e8491"} Mar 12 16:49:03 crc kubenswrapper[4687]: I0312 16:49:03.537113 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" event={"ID":"7b24457c-bd41-4df3-95a1-10b69540a4af","Type":"ContainerStarted","Data":"a86e861ddc0d31f224eb26f23b56112804f0842d2aeeceeb374e9cd7bece3e45"} Mar 12 16:49:03 crc kubenswrapper[4687]: 
I0312 16:49:03.537159 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" event={"ID":"7b24457c-bd41-4df3-95a1-10b69540a4af","Type":"ContainerStarted","Data":"02c339a54f19b842ba3ef0a42eb033a912cf51d0debce00c09b2ee676311b2ef"} Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.036670 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.087892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-rbac\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.087982 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tenants\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.088063 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-ca-bundle\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.088138 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-ca-bundle\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.088210 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-client-http\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.088261 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.088319 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwqmr\" (UniqueName: \"kubernetes.io/projected/0b4762e3-e056-423a-b130-487353e771ed-kube-api-access-vwqmr\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.088527 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-lokistack-gateway\") pod \"0b4762e3-e056-423a-b130-487353e771ed\" (UID: \"0b4762e3-e056-423a-b130-487353e771ed\") " Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.090543 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "logging-loki-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.091537 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-ca-bundle" (OuterVolumeSpecName: "logging-loki-gateway-ca-bundle") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "logging-loki-gateway-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.095634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4762e3-e056-423a-b130-487353e771ed-kube-api-access-vwqmr" (OuterVolumeSpecName: "kube-api-access-vwqmr") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "kube-api-access-vwqmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.096469 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret" (OuterVolumeSpecName: "tls-secret") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "tls-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.124642 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-client-http" (OuterVolumeSpecName: "logging-loki-gateway-client-http") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "logging-loki-gateway-client-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.128060 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-lokistack-gateway" (OuterVolumeSpecName: "lokistack-gateway") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "lokistack-gateway". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.143552 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tenants" (OuterVolumeSpecName: "tenants") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "tenants". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.164820 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-rbac" (OuterVolumeSpecName: "rbac") pod "0b4762e3-e056-423a-b130-487353e771ed" (UID: "0b4762e3-e056-423a-b130-487353e771ed"). InnerVolumeSpecName "rbac". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191047 4687 reconciler_common.go:293] "Volume detached for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tenants\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191079 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191089 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191105 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-logging-loki-gateway-client-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191115 4687 reconciler_common.go:293] "Volume detached for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b4762e3-e056-423a-b130-487353e771ed-tls-secret\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191128 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwqmr\" (UniqueName: \"kubernetes.io/projected/0b4762e3-e056-423a-b130-487353e771ed-kube-api-access-vwqmr\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191139 4687 reconciler_common.go:293] "Volume detached for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-lokistack-gateway\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.191151 4687 reconciler_common.go:293] "Volume detached for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b4762e3-e056-423a-b130-487353e771ed-rbac\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.553121 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" event={"ID":"7b24457c-bd41-4df3-95a1-10b69540a4af","Type":"ContainerStarted","Data":"6332caa682e9f4a48ee4bd3bf5de463b43ee22f8b57ec176a88bd62b569c750a"} Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.553788 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.553844 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.555666 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" event={"ID":"0b4762e3-e056-423a-b130-487353e771ed","Type":"ContainerDied","Data":"10a4688857d37f962fd538f1657aa44cd5faef09cd452093c1f0dd6e25ae5811"} Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.555717 4687 scope.go:117] "RemoveContainer" containerID="ecb4d9ff91e7fc2a4750decb4519b998ca16fc9f1e60617799f11234a403ccf5" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.555751 4687 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-t74nx" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.566989 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.573244 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.586023 4687 scope.go:117] "RemoveContainer" containerID="46922afdc067616b14f9e4d1a062ba92d26923e76bf3fb7dc392a5e3f33e8491" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.591707 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podStartSLOduration=3.591688382 podStartE2EDuration="3.591688382s" podCreationTimestamp="2026-03-12 16:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:49:04.575457198 +0000 UTC m=+2793.539419572" watchObservedRunningTime="2026-03-12 16:49:04.591688382 +0000 UTC m=+2793.555650726" Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.663651 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-t74nx"] Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.683610 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-t74nx"] Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.701382 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-9cpdv"] Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.701856 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="gateway" containerID="cri-o://e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce" gracePeriod=30 Mar 12 16:49:04 crc kubenswrapper[4687]: I0312 16:49:04.702305 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="opa" containerID="cri-o://d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836" gracePeriod=30 Mar 12 16:49:05 crc kubenswrapper[4687]: I0312 16:49:05.571045 4687 generic.go:334] "Generic (PLEG): container finished" podID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerID="d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836" exitCode=0 Mar 12 16:49:05 crc kubenswrapper[4687]: I0312 16:49:05.571124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" event={"ID":"14c9be95-e47d-4358-9b75-ab8188aeff38","Type":"ContainerDied","Data":"d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836"} Mar 12 16:49:05 crc kubenswrapper[4687]: I0312 16:49:05.747485 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4762e3-e056-423a-b130-487353e771ed" path="/var/lib/kubelet/pods/0b4762e3-e056-423a-b130-487353e771ed/volumes" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.247067 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.342950 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-client-http\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343008 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tenants\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343159 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8g2c\" (UniqueName: \"kubernetes.io/projected/14c9be95-e47d-4358-9b75-ab8188aeff38-kube-api-access-x8g2c\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343195 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-ca-bundle\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343278 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-lokistack-gateway\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343406 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343464 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-rbac\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343505 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-ca-bundle\") pod \"14c9be95-e47d-4358-9b75-ab8188aeff38\" (UID: \"14c9be95-e47d-4358-9b75-ab8188aeff38\") " Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.343988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-ca-bundle" (OuterVolumeSpecName: "logging-loki-gateway-ca-bundle") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "logging-loki-gateway-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.344091 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "logging-loki-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.344772 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.344799 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.356744 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c9be95-e47d-4358-9b75-ab8188aeff38-kube-api-access-x8g2c" (OuterVolumeSpecName: "kube-api-access-x8g2c") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "kube-api-access-x8g2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.357234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-client-http" (OuterVolumeSpecName: "logging-loki-gateway-client-http") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "logging-loki-gateway-client-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.366999 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret" (OuterVolumeSpecName: "tls-secret") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "tls-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.377820 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-lokistack-gateway" (OuterVolumeSpecName: "lokistack-gateway") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "lokistack-gateway". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.378312 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-rbac" (OuterVolumeSpecName: "rbac") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "rbac". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.396384 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tenants" (OuterVolumeSpecName: "tenants") pod "14c9be95-e47d-4358-9b75-ab8188aeff38" (UID: "14c9be95-e47d-4358-9b75-ab8188aeff38"). InnerVolumeSpecName "tenants". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.447406 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-logging-loki-gateway-client-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.447451 4687 reconciler_common.go:293] "Volume detached for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tenants\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.447464 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8g2c\" (UniqueName: \"kubernetes.io/projected/14c9be95-e47d-4358-9b75-ab8188aeff38-kube-api-access-x8g2c\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.447477 4687 reconciler_common.go:293] "Volume detached for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-lokistack-gateway\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.447488 4687 reconciler_common.go:293] "Volume detached for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/14c9be95-e47d-4358-9b75-ab8188aeff38-tls-secret\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.447498 4687 reconciler_common.go:293] "Volume detached for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/14c9be95-e47d-4358-9b75-ab8188aeff38-rbac\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.586116 4687 generic.go:334] "Generic (PLEG): container finished" podID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerID="e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce" exitCode=0 Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.586305 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" event={"ID":"14c9be95-e47d-4358-9b75-ab8188aeff38","Type":"ContainerDied","Data":"e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce"} Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.587485 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" event={"ID":"14c9be95-e47d-4358-9b75-ab8188aeff38","Type":"ContainerDied","Data":"575e62c7a4e167f7bc03d235bafb9c928c5a8a3bfcf462f4fbab85f64fa798a1"} Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.587519 4687 scope.go:117] "RemoveContainer" containerID="d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.586412 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-7847845898-9cpdv" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.625470 4687 scope.go:117] "RemoveContainer" containerID="e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.663340 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.663423 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.663850 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-9cpdv"] Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.673266 4687 scope.go:117] "RemoveContainer" containerID="d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836" Mar 12 16:49:06 crc kubenswrapper[4687]: E0312 16:49:06.677483 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836\": container with ID starting with d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836 not found: ID does not exist" containerID="d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.677542 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836"} err="failed to get container status \"d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836\": rpc error: code = NotFound desc = could not find container \"d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836\": container with ID starting with d55e6d0fc17a67f545b1a68a2d03a2e507a42d7bc9f9d249a73c66bd46e30836 not found: ID does not exist" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.677574 4687 scope.go:117] "RemoveContainer" containerID="e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce" Mar 12 16:49:06 crc kubenswrapper[4687]: E0312 16:49:06.678433 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce\": container with ID starting with e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce not found: ID does not exist" containerID="e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.678468 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce"} err="failed to get container status \"e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce\": rpc error: code = NotFound desc = could not find container \"e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce\": container with ID starting with e2332b38a2c3e54bdd4f60a0d64f78ea9476e6cfcc6c18e0636f833761af90ce not found: ID does not exist" Mar 12 16:49:06 crc 
kubenswrapper[4687]: I0312 16:49:06.682420 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-gateway-7847845898-9cpdv"] Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.757705 4687 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.757762 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.842698 4687 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:06 crc kubenswrapper[4687]: I0312 16:49:06.842935 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:07 crc kubenswrapper[4687]: I0312 16:49:07.750668 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" path="/var/lib/kubelet/pods/14c9be95-e47d-4358-9b75-ab8188aeff38/volumes" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.551792 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69p8v"] Mar 12 16:49:11 crc kubenswrapper[4687]: E0312 16:49:11.553329 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="opa" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.553345 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="opa" Mar 12 16:49:11 crc kubenswrapper[4687]: E0312 16:49:11.553362 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="gateway" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.553419 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="gateway" Mar 12 16:49:11 crc kubenswrapper[4687]: E0312 16:49:11.553448 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="gateway" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.553456 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="gateway" Mar 12 16:49:11 crc kubenswrapper[4687]: E0312 16:49:11.553496 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="opa" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.553504 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="opa" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.553800 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="opa" Mar 12 16:49:11 crc 
kubenswrapper[4687]: I0312 16:49:11.553824 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c9be95-e47d-4358-9b75-ab8188aeff38" containerName="gateway" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.553835 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="gateway" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.553849 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4762e3-e056-423a-b130-487353e771ed" containerName="opa" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.556737 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.571107 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69p8v"] Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.601084 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vwq\" (UniqueName: \"kubernetes.io/projected/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-kube-api-access-s4vwq\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.601461 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-catalog-content\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.601574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-utilities\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.702728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vwq\" (UniqueName: \"kubernetes.io/projected/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-kube-api-access-s4vwq\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.702813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-catalog-content\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.702884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-utilities\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.703656 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-utilities\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.703705 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-catalog-content\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.721713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vwq\" (UniqueName: \"kubernetes.io/projected/ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095-kube-api-access-s4vwq\") pod \"community-operators-69p8v\" (UID: \"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095\") " pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:11 crc kubenswrapper[4687]: I0312 16:49:11.883396 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:12 crc kubenswrapper[4687]: I0312 16:49:12.461670 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69p8v"] Mar 12 16:49:12 crc kubenswrapper[4687]: W0312 16:49:12.516833 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8dfd8e_ec34_4ec7_a2cd_bfc3233a8095.slice/crio-0b8280c9bff7338602f2602dbe9d94d8bbe87b2f9ac7e80e9c17458a049e5e15 WatchSource:0}: Error finding container 0b8280c9bff7338602f2602dbe9d94d8bbe87b2f9ac7e80e9c17458a049e5e15: Status 404 returned error can't find the container with id 0b8280c9bff7338602f2602dbe9d94d8bbe87b2f9ac7e80e9c17458a049e5e15 Mar 12 16:49:12 crc kubenswrapper[4687]: I0312 16:49:12.663293 4687 generic.go:334] "Generic (PLEG): container finished" podID="3c549521-57d3-4f63-b447-5700df3e3a47" containerID="be1a3e2829a14818b9d84c32d79b802988d088d541b5c2399d70266f2e3e5761" exitCode=0 Mar 12 16:49:12 crc kubenswrapper[4687]: I0312 16:49:12.663324 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" event={"ID":"3c549521-57d3-4f63-b447-5700df3e3a47","Type":"ContainerDied","Data":"be1a3e2829a14818b9d84c32d79b802988d088d541b5c2399d70266f2e3e5761"} Mar 12 16:49:12 crc kubenswrapper[4687]: I0312 16:49:12.666629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69p8v" event={"ID":"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095","Type":"ContainerStarted","Data":"0b8280c9bff7338602f2602dbe9d94d8bbe87b2f9ac7e80e9c17458a049e5e15"} Mar 12 16:49:13 crc kubenswrapper[4687]: I0312 16:49:13.686569 4687 generic.go:334] "Generic (PLEG): container finished" podID="ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095" containerID="76e429e4a9983db7da95f4b0d54594db29a7abfed2f93a2d7519090fdb697a08" exitCode=0 Mar 12 16:49:13 crc kubenswrapper[4687]: I0312 16:49:13.686665 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69p8v" event={"ID":"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095","Type":"ContainerDied","Data":"76e429e4a9983db7da95f4b0d54594db29a7abfed2f93a2d7519090fdb697a08"} Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.217117 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.366948 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-inventory\") pod \"3c549521-57d3-4f63-b447-5700df3e3a47\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.367223 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-2\") pod \"3c549521-57d3-4f63-b447-5700df3e3a47\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.367397 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-1\") pod \"3c549521-57d3-4f63-b447-5700df3e3a47\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.367713 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-telemetry-combined-ca-bundle\") pod \"3c549521-57d3-4f63-b447-5700df3e3a47\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.367866 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shg6p\" (UniqueName: \"kubernetes.io/projected/3c549521-57d3-4f63-b447-5700df3e3a47-kube-api-access-shg6p\") pod \"3c549521-57d3-4f63-b447-5700df3e3a47\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.367931 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ssh-key-openstack-edpm-ipam\") pod \"3c549521-57d3-4f63-b447-5700df3e3a47\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.368025 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-0\") pod \"3c549521-57d3-4f63-b447-5700df3e3a47\" (UID: \"3c549521-57d3-4f63-b447-5700df3e3a47\") " Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.373452 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c549521-57d3-4f63-b447-5700df3e3a47-kube-api-access-shg6p" (OuterVolumeSpecName: "kube-api-access-shg6p") pod "3c549521-57d3-4f63-b447-5700df3e3a47" (UID: "3c549521-57d3-4f63-b447-5700df3e3a47"). InnerVolumeSpecName "kube-api-access-shg6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.373501 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3c549521-57d3-4f63-b447-5700df3e3a47" (UID: "3c549521-57d3-4f63-b447-5700df3e3a47"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.402181 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3c549521-57d3-4f63-b447-5700df3e3a47" (UID: "3c549521-57d3-4f63-b447-5700df3e3a47"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.404253 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "3c549521-57d3-4f63-b447-5700df3e3a47" (UID: "3c549521-57d3-4f63-b447-5700df3e3a47"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.405833 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "3c549521-57d3-4f63-b447-5700df3e3a47" (UID: "3c549521-57d3-4f63-b447-5700df3e3a47"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.411002 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "3c549521-57d3-4f63-b447-5700df3e3a47" (UID: "3c549521-57d3-4f63-b447-5700df3e3a47"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.422493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-inventory" (OuterVolumeSpecName: "inventory") pod "3c549521-57d3-4f63-b447-5700df3e3a47" (UID: "3c549521-57d3-4f63-b447-5700df3e3a47"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.472357 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.472414 4687 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.472425 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shg6p\" (UniqueName: \"kubernetes.io/projected/3c549521-57d3-4f63-b447-5700df3e3a47-kube-api-access-shg6p\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.472435 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.472444 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.472454 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.472463 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3c549521-57d3-4f63-b447-5700df3e3a47-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.699704 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" event={"ID":"3c549521-57d3-4f63-b447-5700df3e3a47","Type":"ContainerDied","Data":"ca7d2c4c933f281c713153c74b262b9afe82c6aab26af262e992bf9b2de88c03"} Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.700832 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7d2c4c933f281c713153c74b262b9afe82c6aab26af262e992bf9b2de88c03" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.699735 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.786769 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95"] Mar 12 16:49:14 crc kubenswrapper[4687]: E0312 16:49:14.787495 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c549521-57d3-4f63-b447-5700df3e3a47" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.787520 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c549521-57d3-4f63-b447-5700df3e3a47" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.787810 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c549521-57d3-4f63-b447-5700df3e3a47" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.788965 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.796961 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.797010 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.797162 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.797212 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.797498 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.818712 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95"] Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.881305 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqchd\" (UniqueName: \"kubernetes.io/projected/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-kube-api-access-vqchd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.881614 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.881676 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.881703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.881722 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.881880 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.881917 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.985555 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.985629 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.985740 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.985775 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqchd\" (UniqueName: \"kubernetes.io/projected/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-kube-api-access-vqchd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.985845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.985877 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.985895 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.990132 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.990280 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.992615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:14 crc kubenswrapper[4687]: I0312 16:49:14.995305 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:15 crc kubenswrapper[4687]: I0312 16:49:15.003960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:15 crc kubenswrapper[4687]: I0312 16:49:15.004735 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqchd\" (UniqueName: \"kubernetes.io/projected/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-kube-api-access-vqchd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:15 crc kubenswrapper[4687]: I0312 16:49:15.007962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:15 crc kubenswrapper[4687]: I0312 16:49:15.110790 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:49:15 crc kubenswrapper[4687]: I0312 16:49:15.846270 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95"] Mar 12 16:49:15 crc kubenswrapper[4687]: W0312 16:49:15.849785 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72ce7c15_b60f_47fc_93d8_68d5dda4b9ec.slice/crio-b998997eb8fba31abd3be13ecee48768aee9c20a2405566251819c8bbbab17df WatchSource:0}: Error finding container b998997eb8fba31abd3be13ecee48768aee9c20a2405566251819c8bbbab17df: Status 404 returned error can't find the container with id b998997eb8fba31abd3be13ecee48768aee9c20a2405566251819c8bbbab17df Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.647515 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.647567 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.647633 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.721034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" event={"ID":"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec","Type":"ContainerStarted","Data":"b998997eb8fba31abd3be13ecee48768aee9c20a2405566251819c8bbbab17df"} Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.755079 4687 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.755133 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerName="loki-compactor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.755559 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.830406 4687 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.830460 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerName="loki-index-gateway" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:16 crc kubenswrapper[4687]: I0312 16:49:16.830539 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:20 crc kubenswrapper[4687]: I0312 16:49:20.780084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" event={"ID":"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec","Type":"ContainerStarted","Data":"77095e59f42f9c018ebc8c51e90a79364986df3ff273a4fe9462cb6bc51d5e48"} Mar 12 16:49:20 crc kubenswrapper[4687]: I0312 16:49:20.785843 4687 generic.go:334] "Generic (PLEG): container finished" podID="ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095" containerID="19c3b07748c58bf27d306c4be62a46d0a08ec2a061a1f63f12f5f70244f39bf1" exitCode=0 Mar 12 16:49:20 crc kubenswrapper[4687]: I0312 16:49:20.785889 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69p8v" event={"ID":"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095","Type":"ContainerDied","Data":"19c3b07748c58bf27d306c4be62a46d0a08ec2a061a1f63f12f5f70244f39bf1"} Mar 12 16:49:20 crc kubenswrapper[4687]: I0312 16:49:20.805429 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" podStartSLOduration=2.945037305 podStartE2EDuration="6.805410222s" podCreationTimestamp="2026-03-12 16:49:14 +0000 UTC" firstStartedPulling="2026-03-12 16:49:15.856182455 +0000 UTC m=+2804.820144799" lastFinishedPulling="2026-03-12 16:49:19.716555372 +0000 UTC m=+2808.680517716" observedRunningTime="2026-03-12 16:49:20.800928339 +0000 UTC m=+2809.764890683" watchObservedRunningTime="2026-03-12 16:49:20.805410222 +0000 UTC m=+2809.769372576" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.073816 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.135756 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2"] Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.135967 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerName="loki-distributor" containerID="cri-o://541031a8847e15f19cb40a3dafd43ee51a05693a4687976728feaabd4b0eabd1" gracePeriod=30 Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.162751 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.259730 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-blj52"] Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.259925 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerName="loki-querier" containerID="cri-o://5b0090e754e862e3c3a96c9e0911ab964477142974a0441f6626a34602552301" gracePeriod=30 Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.274147 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.351774 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5"] Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.351958 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" podUID="9a12135c-5a4e-490c-9a7a-2e741084697d" containerName="loki-query-frontend" containerID="cri-o://7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614" gracePeriod=30 Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.489454 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.611428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-s3\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.615592 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.619117 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"wal\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.619496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-http\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.620021 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ca-bundle\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.620563 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-grpc\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.620704 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qlp2\" (UniqueName: \"kubernetes.io/projected/c064f367-224d-4c76-b932-4dcd72d61b85-kube-api-access-6qlp2\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.620796 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-config\") pod \"c064f367-224d-4c76-b932-4dcd72d61b85\" (UID: \"c064f367-224d-4c76-b932-4dcd72d61b85\") 
" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.622141 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-s3" (OuterVolumeSpecName: "logging-loki-s3") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "logging-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.622324 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "logging-loki-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.622735 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-config" (OuterVolumeSpecName: "config") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.623510 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-s3\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.623535 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.623549 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c064f367-224d-4c76-b932-4dcd72d61b85-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.628632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-grpc" (OuterVolumeSpecName: "logging-loki-ingester-grpc") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "logging-loki-ingester-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.628723 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-http" (OuterVolumeSpecName: "logging-loki-ingester-http") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "logging-loki-ingester-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.642186 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c064f367-224d-4c76-b932-4dcd72d61b85-kube-api-access-6qlp2" (OuterVolumeSpecName: "kube-api-access-6qlp2") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "kube-api-access-6qlp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.668209 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5" (OuterVolumeSpecName: "wal") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "pvc-d0c5ad02-d16a-4375-86af-574211989ef5". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.668233 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422" (OuterVolumeSpecName: "storage") pod "c064f367-224d-4c76-b932-4dcd72d61b85" (UID: "c064f367-224d-4c76-b932-4dcd72d61b85"). InnerVolumeSpecName "pvc-ba232192-4f24-4cd7-8760-3babafa55422". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.730048 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") on node \"crc\" " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.730084 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") on node \"crc\" " Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.730096 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.730106 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c064f367-224d-4c76-b932-4dcd72d61b85-logging-loki-ingester-grpc\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.730116 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qlp2\" (UniqueName: \"kubernetes.io/projected/c064f367-224d-4c76-b932-4dcd72d61b85-kube-api-access-6qlp2\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.794658 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.794809 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d0c5ad02-d16a-4375-86af-574211989ef5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5") on node "crc" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.813710 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.813911 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ba232192-4f24-4cd7-8760-3babafa55422" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422") on node "crc" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.830916 4687 generic.go:334] "Generic (PLEG): container finished" podID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerID="31a92a22b1d3049f36d2932f00ae91d186801ebf59c384e2d4af16f88ec29366" exitCode=137 Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.830980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"5988e7b5-79d2-42db-aa38-0ba4224cac87","Type":"ContainerDied","Data":"31a92a22b1d3049f36d2932f00ae91d186801ebf59c384e2d4af16f88ec29366"} Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.832433 4687 generic.go:334] "Generic (PLEG): container finished" podID="c064f367-224d-4c76-b932-4dcd72d61b85" containerID="b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1" exitCode=137 Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.832471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"c064f367-224d-4c76-b932-4dcd72d61b85","Type":"ContainerDied","Data":"b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1"} Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.832484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"c064f367-224d-4c76-b932-4dcd72d61b85","Type":"ContainerDied","Data":"e1b69bc34fcef8fd8edebc64e1e142640c26c4477ff7ef39b8c20e5ab3f40619"} Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.832501 4687 scope.go:117] "RemoveContainer" containerID="b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.832615 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.848680 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.848703 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.862584 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69p8v" event={"ID":"ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095","Type":"ContainerStarted","Data":"a21a53d29daa77ceae49a3c169368d811ebdb9959be2fa42b73b1a1a192b904e"} Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.888466 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69p8v" podStartSLOduration=3.8438352 podStartE2EDuration="11.888450792s" podCreationTimestamp="2026-03-12 16:49:11 +0000 UTC" firstStartedPulling="2026-03-12 16:49:13.688520052 +0000 UTC m=+2802.652482396" lastFinishedPulling="2026-03-12 16:49:21.733135644 +0000 UTC m=+2810.697097988" observedRunningTime="2026-03-12 16:49:22.886261412 +0000 UTC m=+2811.850223756" watchObservedRunningTime="2026-03-12 16:49:22.888450792 +0000 UTC m=+2811.852413136" Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.889601 4687 generic.go:334] "Generic (PLEG): container finished" podID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerID="21c54ee8fa5a1723869ed6bd1240c2446db2a05934d7e0f6a51b4fb501e35f88" exitCode=137 Mar 12 16:49:22 crc kubenswrapper[4687]: I0312 16:49:22.889638 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"903e7127-e866-4a6f-b55c-d5efa486ed88","Type":"ContainerDied","Data":"21c54ee8fa5a1723869ed6bd1240c2446db2a05934d7e0f6a51b4fb501e35f88"} Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.054425 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.061445 4687 scope.go:117] "RemoveContainer" containerID="b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1" Mar 12 16:49:23 crc kubenswrapper[4687]: E0312 16:49:23.066863 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1\": container with ID starting with b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1 not found: ID does not exist" containerID="b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.066903 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1"} err="failed to get container status \"b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1\": rpc error: code = NotFound desc = could not find container \"b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1\": container with ID starting with 
b9d90cdd5911939f2906d86ae1188c8a97a4546d82bcc500fea1da24ef0bedc1 not found: ID does not exist" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.076578 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.106122 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:49:23 crc kubenswrapper[4687]: E0312 16:49:23.106761 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.106778 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.106989 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" containerName="loki-ingester" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.107855 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.117174 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.120383 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.140115 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.140484 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.148156 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.255614 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-http\") pod \"5988e7b5-79d2-42db-aa38-0ba4224cac87\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.256397 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"903e7127-e866-4a6f-b55c-d5efa486ed88\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.256454 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-ca-bundle\") pod \"5988e7b5-79d2-42db-aa38-0ba4224cac87\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.256489 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-s3\") pod \"5988e7b5-79d2-42db-aa38-0ba4224cac87\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.256528 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f544\" (UniqueName: \"kubernetes.io/projected/903e7127-e866-4a6f-b55c-d5efa486ed88-kube-api-access-4f544\") pod \"903e7127-e866-4a6f-b55c-d5efa486ed88\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.256593 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-http\") pod \"903e7127-e866-4a6f-b55c-d5efa486ed88\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.256644 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-config\") pod \"5988e7b5-79d2-42db-aa38-0ba4224cac87\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.256721 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-grpc\") pod \"5988e7b5-79d2-42db-aa38-0ba4224cac87\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.257610 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-config" (OuterVolumeSpecName: "config") pod "5988e7b5-79d2-42db-aa38-0ba4224cac87" (UID: "5988e7b5-79d2-42db-aa38-0ba4224cac87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.257893 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-ca-bundle\") pod \"903e7127-e866-4a6f-b55c-d5efa486ed88\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.257958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-s3\") pod \"903e7127-e866-4a6f-b55c-d5efa486ed88\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.258130 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "5988e7b5-79d2-42db-aa38-0ba4224cac87" (UID: "5988e7b5-79d2-42db-aa38-0ba4224cac87"). InnerVolumeSpecName "logging-loki-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.258204 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "903e7127-e866-4a6f-b55c-d5efa486ed88" (UID: "903e7127-e866-4a6f-b55c-d5efa486ed88"). InnerVolumeSpecName "logging-loki-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.258377 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-grpc\") pod \"903e7127-e866-4a6f-b55c-d5efa486ed88\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.260989 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"5988e7b5-79d2-42db-aa38-0ba4224cac87\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.261822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-s3" (OuterVolumeSpecName: "logging-loki-s3") pod "5988e7b5-79d2-42db-aa38-0ba4224cac87" (UID: "5988e7b5-79d2-42db-aa38-0ba4224cac87"). InnerVolumeSpecName "logging-loki-s3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.261945 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-config\") pod \"903e7127-e866-4a6f-b55c-d5efa486ed88\" (UID: \"903e7127-e866-4a6f-b55c-d5efa486ed88\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.262353 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7kk\" (UniqueName: \"kubernetes.io/projected/5988e7b5-79d2-42db-aa38-0ba4224cac87-kube-api-access-vl7kk\") pod \"5988e7b5-79d2-42db-aa38-0ba4224cac87\" (UID: \"5988e7b5-79d2-42db-aa38-0ba4224cac87\") " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.262372 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-grpc" (OuterVolumeSpecName: "logging-loki-index-gateway-grpc") pod "5988e7b5-79d2-42db-aa38-0ba4224cac87" (UID: "5988e7b5-79d2-42db-aa38-0ba4224cac87"). InnerVolumeSpecName "logging-loki-index-gateway-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.262428 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903e7127-e866-4a6f-b55c-d5efa486ed88-kube-api-access-4f544" (OuterVolumeSpecName: "kube-api-access-4f544") pod "903e7127-e866-4a6f-b55c-d5efa486ed88" (UID: "903e7127-e866-4a6f-b55c-d5efa486ed88"). InnerVolumeSpecName "kube-api-access-4f544". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.263773 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.263827 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-http" (OuterVolumeSpecName: "logging-loki-compactor-http") pod "903e7127-e866-4a6f-b55c-d5efa486ed88" (UID: "903e7127-e866-4a6f-b55c-d5efa486ed88"). InnerVolumeSpecName "logging-loki-compactor-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.263989 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-s3" (OuterVolumeSpecName: "logging-loki-s3") pod "903e7127-e866-4a6f-b55c-d5efa486ed88" (UID: "903e7127-e866-4a6f-b55c-d5efa486ed88"). InnerVolumeSpecName "logging-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264066 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-http" (OuterVolumeSpecName: "logging-loki-index-gateway-http") pod "5988e7b5-79d2-42db-aa38-0ba4224cac87" (UID: "5988e7b5-79d2-42db-aa38-0ba4224cac87"). InnerVolumeSpecName "logging-loki-index-gateway-http". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264086 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264471 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-grpc" (OuterVolumeSpecName: "logging-loki-compactor-grpc") pod "903e7127-e866-4a6f-b55c-d5efa486ed88" (UID: "903e7127-e866-4a6f-b55c-d5efa486ed88"). InnerVolumeSpecName "logging-loki-compactor-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264495 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264619 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-config\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.264937 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265001 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-config" (OuterVolumeSpecName: "config") pod "903e7127-e866-4a6f-b55c-d5efa486ed88" (UID: "903e7127-e866-4a6f-b55c-d5efa486ed88"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265037 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5qm6\" (UniqueName: \"kubernetes.io/projected/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-kube-api-access-h5qm6\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265234 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-grpc\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265252 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265263 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-s3\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265271 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-grpc\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265284 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/903e7127-e866-4a6f-b55c-d5efa486ed88-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265294 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-index-gateway-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265305 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265314 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5988e7b5-79d2-42db-aa38-0ba4224cac87-logging-loki-s3\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265321 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f544\" (UniqueName: \"kubernetes.io/projected/903e7127-e866-4a6f-b55c-d5efa486ed88-kube-api-access-4f544\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265330 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/903e7127-e866-4a6f-b55c-d5efa486ed88-logging-loki-compactor-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.265338 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5988e7b5-79d2-42db-aa38-0ba4224cac87-config\") on node \"crc\" DevicePath \"\"" Mar 12 
16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.268997 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5988e7b5-79d2-42db-aa38-0ba4224cac87-kube-api-access-vl7kk" (OuterVolumeSpecName: "kube-api-access-vl7kk") pod "5988e7b5-79d2-42db-aa38-0ba4224cac87" (UID: "5988e7b5-79d2-42db-aa38-0ba4224cac87"). InnerVolumeSpecName "kube-api-access-vl7kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.287258 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7" (OuterVolumeSpecName: "storage") pod "903e7127-e866-4a6f-b55c-d5efa486ed88" (UID: "903e7127-e866-4a6f-b55c-d5efa486ed88"). InnerVolumeSpecName "pvc-e3755688-e20d-46d7-92c1-a244383e80e7". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.289838 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232" (OuterVolumeSpecName: "storage") pod "5988e7b5-79d2-42db-aa38-0ba4224cac87" (UID: "5988e7b5-79d2-42db-aa38-0ba4224cac87"). InnerVolumeSpecName "pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.367004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.367974 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5qm6\" (UniqueName: \"kubernetes.io/projected/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-kube-api-access-h5qm6\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368135 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368266 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368487 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-config\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368584 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368674 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7kk\" (UniqueName: \"kubernetes.io/projected/5988e7b5-79d2-42db-aa38-0ba4224cac87-kube-api-access-vl7kk\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368706 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") on node \"crc\" " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.368727 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") on node \"crc\" " Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.370631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.371584 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-config\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.372660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.372818 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 
crc kubenswrapper[4687]: I0312 16:49:23.373296 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.373343 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e112b0e29892c9082e00bb3a09468a76dfb49ba96d4032021a89591e57977612/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.373432 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.373502 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9bffae0546d96168a8a4c72635781c695e8137ed3a747b95302275473e78a4e9/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.374581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.393118 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5qm6\" (UniqueName: \"kubernetes.io/projected/2dc7c2f7-1478-4385-be1a-a2257e4dc2d3-kube-api-access-h5qm6\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.409536 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.409711 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232") on node "crc" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.417709 4687 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.417852 4687 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e3755688-e20d-46d7-92c1-a244383e80e7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7") on node "crc" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.458653 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d0c5ad02-d16a-4375-86af-574211989ef5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0c5ad02-d16a-4375-86af-574211989ef5\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.472462 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.472501 4687 reconciler_common.go:293] "Volume detached for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.481435 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba232192-4f24-4cd7-8760-3babafa55422\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba232192-4f24-4cd7-8760-3babafa55422\") pod \"logging-loki-ingester-0\" (UID: \"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3\") " pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.755064 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c064f367-224d-4c76-b932-4dcd72d61b85" path="/var/lib/kubelet/pods/c064f367-224d-4c76-b932-4dcd72d61b85/volumes" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.768144 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.907824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"903e7127-e866-4a6f-b55c-d5efa486ed88","Type":"ContainerDied","Data":"a94c1f05f9c4c6897f7504c3168da5e9de5663a3caa06ee5bec79cf9f145f171"} Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.907863 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.907881 4687 scope.go:117] "RemoveContainer" containerID="21c54ee8fa5a1723869ed6bd1240c2446db2a05934d7e0f6a51b4fb501e35f88" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.915570 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.916438 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"5988e7b5-79d2-42db-aa38-0ba4224cac87","Type":"ContainerDied","Data":"2d064957803bea20a7b10d6c8d8072e850ae65f348b8d5d91d5583df8b60a2e2"} Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.945021 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.983487 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:49:23 crc kubenswrapper[4687]: I0312 16:49:23.996102 4687 scope.go:117] "RemoveContainer" containerID="31a92a22b1d3049f36d2932f00ae91d186801ebf59c384e2d4af16f88ec29366" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.019471 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.036200 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: E0312 16:49:24.036751 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerName="loki-compactor" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.036768 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerName="loki-compactor" Mar 12 16:49:24 crc kubenswrapper[4687]: E0312 16:49:24.036787 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerName="loki-index-gateway" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.036794 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerName="loki-index-gateway" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.037073 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" containerName="loki-index-gateway" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.037099 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" containerName="loki-compactor" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.037931 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.040428 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.040672 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.049101 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.064763 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.085396 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.086952 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.091748 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.091934 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.106352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.106709 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.106784 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.106831 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrdpc\" (UniqueName: \"kubernetes.io/projected/f017a70e-cb13-441a-a70c-0809569c1c52-kube-api-access-hrdpc\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.107012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: 
\"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.107113 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.107241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f017a70e-cb13-441a-a70c-0809569c1c52-config\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.115513 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.209782 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.209883 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.209915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.209933 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrdpc\" (UniqueName: \"kubernetes.io/projected/f017a70e-cb13-441a-a70c-0809569c1c52-kube-api-access-hrdpc\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.209956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56100a9-2dec-4a46-a619-6922b78f7e16-config\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.209988 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210034 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210053 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6fx\" (UniqueName: \"kubernetes.io/projected/f56100a9-2dec-4a46-a619-6922b78f7e16-kube-api-access-wp6fx\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210176 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f017a70e-cb13-441a-a70c-0809569c1c52-config\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210196 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210225 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.210866 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.211809 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f017a70e-cb13-441a-a70c-0809569c1c52-config\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.216412 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.217078 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.226338 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.226458 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/10e4a4756392c350b6812219504826b8241bb985c5813d60bce0f155df6e09ec/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.230407 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f017a70e-cb13-441a-a70c-0809569c1c52-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.231352 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrdpc\" (UniqueName: \"kubernetes.io/projected/f017a70e-cb13-441a-a70c-0809569c1c52-kube-api-access-hrdpc\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.281805 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: W0312 16:49:24.282581 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc7c2f7_1478_4385_be1a_a2257e4dc2d3.slice/crio-c655e6fa99dac0e73f61ff6a8540f0d902037d5e22ee766c67881056345874c6 WatchSource:0}: Error finding container 
c655e6fa99dac0e73f61ff6a8540f0d902037d5e22ee766c67881056345874c6: Status 404 returned error can't find the container with id c655e6fa99dac0e73f61ff6a8540f0d902037d5e22ee766c67881056345874c6 Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.292462 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3755688-e20d-46d7-92c1-a244383e80e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3755688-e20d-46d7-92c1-a244383e80e7\") pod \"logging-loki-compactor-0\" (UID: \"f017a70e-cb13-441a-a70c-0809569c1c52\") " pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.312000 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.312083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.312131 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.312204 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56100a9-2dec-4a46-a619-6922b78f7e16-config\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.312244 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.312286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp6fx\" (UniqueName: \"kubernetes.io/projected/f56100a9-2dec-4a46-a619-6922b78f7e16-kube-api-access-wp6fx\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.312326 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: 
I0312 16:49:24.314587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56100a9-2dec-4a46-a619-6922b78f7e16-config\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.314695 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.317406 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.317871 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.318749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/f56100a9-2dec-4a46-a619-6922b78f7e16-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.319476 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.319503 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94a21af1b3ca60f5b53cf1801d45610b9f1530d7b95a3e9060844383772a149d/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.333622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp6fx\" (UniqueName: \"kubernetes.io/projected/f56100a9-2dec-4a46-a619-6922b78f7e16-kube-api-access-wp6fx\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.363570 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.372192 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d0d4e18-07c1-4bad-bf17-9c94a84a4232\") pod \"logging-loki-index-gateway-0\" (UID: \"f56100a9-2dec-4a46-a619-6922b78f7e16\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.427479 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.874797 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 12 16:49:24 crc kubenswrapper[4687]: W0312 16:49:24.881560 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf017a70e_cb13_441a_a70c_0809569c1c52.slice/crio-bdecbabfa48f52d4cb6f727251ca8ba9d1c336bbf1d999e24382df64662e7307 WatchSource:0}: Error finding container bdecbabfa48f52d4cb6f727251ca8ba9d1c336bbf1d999e24382df64662e7307: Status 404 returned error can't find the container with id bdecbabfa48f52d4cb6f727251ca8ba9d1c336bbf1d999e24382df64662e7307 Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.931279 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3","Type":"ContainerStarted","Data":"69d657863c823ab1268f38c2c166fc65ce39381451c0963ae472884e331547b5"} Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.931399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"2dc7c2f7-1478-4385-be1a-a2257e4dc2d3","Type":"ContainerStarted","Data":"c655e6fa99dac0e73f61ff6a8540f0d902037d5e22ee766c67881056345874c6"} Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.932006 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.950429 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"f017a70e-cb13-441a-a70c-0809569c1c52","Type":"ContainerStarted","Data":"bdecbabfa48f52d4cb6f727251ca8ba9d1c336bbf1d999e24382df64662e7307"} Mar 12 16:49:24 crc kubenswrapper[4687]: I0312 16:49:24.978579 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=1.978542983 podStartE2EDuration="1.978542983s" podCreationTimestamp="2026-03-12 16:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:49:24.957167449 +0000 UTC m=+2813.921129793" watchObservedRunningTime="2026-03-12 16:49:24.978542983 +0000 UTC m=+2813.942505327" Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.036471 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.466630 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-mm5q2 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with 
statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.466682 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.656877 4687 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-blj52 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.657312 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.761691 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5988e7b5-79d2-42db-aa38-0ba4224cac87" path="/var/lib/kubelet/pods/5988e7b5-79d2-42db-aa38-0ba4224cac87/volumes" Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.762733 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903e7127-e866-4a6f-b55c-d5efa486ed88" path="/var/lib/kubelet/pods/903e7127-e866-4a6f-b55c-d5efa486ed88/volumes" Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.967846 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"f017a70e-cb13-441a-a70c-0809569c1c52","Type":"ContainerStarted","Data":"e1ac9c2f7537c3ab48337107d4cbca36116f5e968c08f891c41f81c69f7cee91"} Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.968114 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.973304 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"f56100a9-2dec-4a46-a619-6922b78f7e16","Type":"ContainerStarted","Data":"7af52c521f4f1785666d163e56ba0902c3c0420d1f6e33ae4b21316d2544154c"} Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.973379 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"f56100a9-2dec-4a46-a619-6922b78f7e16","Type":"ContainerStarted","Data":"023abdad2a1b7aa280c4c7afc2f73ea99d909cfe54520614d6bc77e1dd1cdc6e"} Mar 12 16:49:25 crc kubenswrapper[4687]: I0312 16:49:25.973491 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:26 crc kubenswrapper[4687]: I0312 16:49:26.010172 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.010151667 podStartE2EDuration="3.010151667s" podCreationTimestamp="2026-03-12 16:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:49:25.987912249 +0000 UTC m=+2814.951874583" watchObservedRunningTime="2026-03-12 16:49:26.010151667 +0000 UTC m=+2814.974114011" Mar 12 16:49:26 crc kubenswrapper[4687]: I0312 16:49:26.016387 4687 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.016363617 podStartE2EDuration="3.016363617s" podCreationTimestamp="2026-03-12 16:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 16:49:26.005009806 +0000 UTC m=+2814.968972190" watchObservedRunningTime="2026-03-12 16:49:26.016363617 +0000 UTC m=+2814.980325961" Mar 12 16:49:31 crc kubenswrapper[4687]: I0312 16:49:31.883733 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:31 crc kubenswrapper[4687]: I0312 16:49:31.884768 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:31 crc kubenswrapper[4687]: I0312 16:49:31.954465 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:32 crc kubenswrapper[4687]: I0312 16:49:32.131020 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69p8v" Mar 12 16:49:32 crc kubenswrapper[4687]: I0312 16:49:32.267163 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69p8v"] Mar 12 16:49:32 crc kubenswrapper[4687]: I0312 16:49:32.332989 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmq5m"] Mar 12 16:49:32 crc kubenswrapper[4687]: I0312 16:49:32.333284 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lmq5m" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" containerName="registry-server" containerID="cri-o://4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a" gracePeriod=2 Mar 12 16:49:32 crc kubenswrapper[4687]: I0312 16:49:32.879786 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.057425 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpdcv\" (UniqueName: \"kubernetes.io/projected/268cba67-b44a-4d37-8567-475c1fd27a20-kube-api-access-cpdcv\") pod \"268cba67-b44a-4d37-8567-475c1fd27a20\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.057595 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-catalog-content\") pod \"268cba67-b44a-4d37-8567-475c1fd27a20\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.057700 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-utilities\") pod \"268cba67-b44a-4d37-8567-475c1fd27a20\" (UID: \"268cba67-b44a-4d37-8567-475c1fd27a20\") " Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.065235 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-utilities" (OuterVolumeSpecName: "utilities") pod "268cba67-b44a-4d37-8567-475c1fd27a20" (UID: "268cba67-b44a-4d37-8567-475c1fd27a20"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.069059 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268cba67-b44a-4d37-8567-475c1fd27a20-kube-api-access-cpdcv" (OuterVolumeSpecName: "kube-api-access-cpdcv") pod "268cba67-b44a-4d37-8567-475c1fd27a20" (UID: "268cba67-b44a-4d37-8567-475c1fd27a20"). InnerVolumeSpecName "kube-api-access-cpdcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.069823 4687 generic.go:334] "Generic (PLEG): container finished" podID="268cba67-b44a-4d37-8567-475c1fd27a20" containerID="4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a" exitCode=0 Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.069913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmq5m" event={"ID":"268cba67-b44a-4d37-8567-475c1fd27a20","Type":"ContainerDied","Data":"4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a"} Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.069950 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmq5m" event={"ID":"268cba67-b44a-4d37-8567-475c1fd27a20","Type":"ContainerDied","Data":"87951ebbfc6dd23f06ac654c4cda3fafc100b1ac5dac0f638204f7131e9b5114"} Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.069973 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmq5m" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.069974 4687 scope.go:117] "RemoveContainer" containerID="4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.129451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "268cba67-b44a-4d37-8567-475c1fd27a20" (UID: "268cba67-b44a-4d37-8567-475c1fd27a20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.141256 4687 scope.go:117] "RemoveContainer" containerID="dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.160578 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpdcv\" (UniqueName: \"kubernetes.io/projected/268cba67-b44a-4d37-8567-475c1fd27a20-kube-api-access-cpdcv\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.160617 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.160630 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/268cba67-b44a-4d37-8567-475c1fd27a20-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.187281 4687 scope.go:117] "RemoveContainer" containerID="f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.225463 4687 scope.go:117] "RemoveContainer" containerID="4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a" Mar 12 16:49:33 crc kubenswrapper[4687]: E0312 16:49:33.225976 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a\": container with ID starting with 4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a not found: ID does not exist" containerID="4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.226024 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a"} err="failed to get container status \"4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a\": rpc error: code = NotFound desc = could not find container \"4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a\": container with ID starting with 4a49b93d687fe562220f54d3bcf5aa3c546e4324dd8c603a1ae3c2a23d2f0f8a not found: ID does not exist" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.226050 4687 scope.go:117] "RemoveContainer" containerID="dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee" Mar 12 16:49:33 crc kubenswrapper[4687]: E0312 16:49:33.226513 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee\": container with ID starting with dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee not found: ID does not exist" containerID="dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.226638 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee"} err="failed to get container status \"dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee\": rpc error: code = NotFound desc = could not find container 
\"dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee\": container with ID starting with dc2d481e3d287f253daeaee6a6b4aa010b48d240ab09572abcec8404f3f190ee not found: ID does not exist" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.226744 4687 scope.go:117] "RemoveContainer" containerID="f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144" Mar 12 16:49:33 crc kubenswrapper[4687]: E0312 16:49:33.227079 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144\": container with ID starting with f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144 not found: ID does not exist" containerID="f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.227110 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144"} err="failed to get container status \"f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144\": rpc error: code = NotFound desc = could not find container \"f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144\": container with ID starting with f7e11f2ecfaf153eeb2a21599ffcaf785def254daf9f8c59bd4d31c6cd95d144 not found: ID does not exist" Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.409584 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lmq5m"] Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.420600 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lmq5m"] Mar 12 16:49:33 crc kubenswrapper[4687]: I0312 16:49:33.750399 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" path="/var/lib/kubelet/pods/268cba67-b44a-4d37-8567-475c1fd27a20/volumes" Mar 12 16:49:34 crc kubenswrapper[4687]: I0312 16:49:34.134675 4687 scope.go:117] "RemoveContainer" containerID="4a3b2416cdf144bea347dca9688c71c6671ad10a1937fa2e35e396b19d8f944f" Mar 12 16:49:34 crc kubenswrapper[4687]: I0312 16:49:34.167164 4687 scope.go:117] "RemoveContainer" containerID="bb5b631421b369f85e9080f08fbd387eeea33b54785e7e5427600c1b52cfe6b1" Mar 12 16:49:34 crc kubenswrapper[4687]: I0312 16:49:34.229173 4687 scope.go:117] "RemoveContainer" containerID="19fd986551a22726d75cda3e83f21ecf11aafd2d9d5117efc0d17b66343ddae7" Mar 12 16:49:35 crc kubenswrapper[4687]: I0312 16:49:35.466058 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-mm5q2 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:35 crc kubenswrapper[4687]: I0312 16:49:35.466472 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:35 crc kubenswrapper[4687]: I0312 16:49:35.655225 4687 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-blj52 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:35 crc 
kubenswrapper[4687]: I0312 16:49:35.655270 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:43 crc kubenswrapper[4687]: I0312 16:49:43.776809 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 12 16:49:43 crc kubenswrapper[4687]: I0312 16:49:43.777405 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:44 crc kubenswrapper[4687]: I0312 16:49:44.370910 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 12 16:49:44 crc kubenswrapper[4687]: I0312 16:49:44.433794 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 12 16:49:45 crc kubenswrapper[4687]: I0312 16:49:45.467971 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-mm5q2 container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:45 crc kubenswrapper[4687]: I0312 16:49:45.468307 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerName="loki-distributor" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:45 crc kubenswrapper[4687]: I0312 16:49:45.468414 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:49:45 crc kubenswrapper[4687]: I0312 16:49:45.655682 4687 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-blj52 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Application is stopping Mar 12 16:49:45 crc kubenswrapper[4687]: I0312 16:49:45.655750 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerName="loki-querier" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:49:45 crc kubenswrapper[4687]: I0312 16:49:45.655833 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.311550 4687 generic.go:334] "Generic (PLEG): container finished" podID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerID="5b0090e754e862e3c3a96c9e0911ab964477142974a0441f6626a34602552301" exitCode=137 Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.311642 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" 
event={"ID":"acd169c8-4496-4d03-a06b-de5cd486e7ed","Type":"ContainerDied","Data":"5b0090e754e862e3c3a96c9e0911ab964477142974a0441f6626a34602552301"} Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.315093 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerID="541031a8847e15f19cb40a3dafd43ee51a05693a4687976728feaabd4b0eabd1" exitCode=137 Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.315130 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" event={"ID":"d5585f46-790b-4a19-b276-9c185d49e5fb","Type":"ContainerDied","Data":"541031a8847e15f19cb40a3dafd43ee51a05693a4687976728feaabd4b0eabd1"} Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.594128 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.681020 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-http\") pod \"d5585f46-790b-4a19-b276-9c185d49e5fb\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.682027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-config\") pod \"d5585f46-790b-4a19-b276-9c185d49e5fb\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.682577 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-config" (OuterVolumeSpecName: "config") pod "d5585f46-790b-4a19-b276-9c185d49e5fb" (UID: "d5585f46-790b-4a19-b276-9c185d49e5fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.682746 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-grpc\") pod \"d5585f46-790b-4a19-b276-9c185d49e5fb\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.682794 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-ca-bundle\") pod \"d5585f46-790b-4a19-b276-9c185d49e5fb\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.683319 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "d5585f46-790b-4a19-b276-9c185d49e5fb" (UID: "d5585f46-790b-4a19-b276-9c185d49e5fb"). InnerVolumeSpecName "logging-loki-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.683450 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnjz9\" (UniqueName: \"kubernetes.io/projected/d5585f46-790b-4a19-b276-9c185d49e5fb-kube-api-access-rnjz9\") pod \"d5585f46-790b-4a19-b276-9c185d49e5fb\" (UID: \"d5585f46-790b-4a19-b276-9c185d49e5fb\") " Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.684596 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.684619 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.688715 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-grpc" (OuterVolumeSpecName: "logging-loki-distributor-grpc") pod "d5585f46-790b-4a19-b276-9c185d49e5fb" (UID: "d5585f46-790b-4a19-b276-9c185d49e5fb"). InnerVolumeSpecName "logging-loki-distributor-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.692256 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5585f46-790b-4a19-b276-9c185d49e5fb-kube-api-access-rnjz9" (OuterVolumeSpecName: "kube-api-access-rnjz9") pod "d5585f46-790b-4a19-b276-9c185d49e5fb" (UID: "d5585f46-790b-4a19-b276-9c185d49e5fb"). InnerVolumeSpecName "kube-api-access-rnjz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.692386 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-http" (OuterVolumeSpecName: "logging-loki-distributor-http") pod "d5585f46-790b-4a19-b276-9c185d49e5fb" (UID: "d5585f46-790b-4a19-b276-9c185d49e5fb"). InnerVolumeSpecName "logging-loki-distributor-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.787634 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnjz9\" (UniqueName: \"kubernetes.io/projected/d5585f46-790b-4a19-b276-9c185d49e5fb-kube-api-access-rnjz9\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.787677 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:52 crc kubenswrapper[4687]: I0312 16:49:52.787693 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d5585f46-790b-4a19-b276-9c185d49e5fb-logging-loki-distributor-grpc\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.048866 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.103486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b2s2\" (UniqueName: \"kubernetes.io/projected/acd169c8-4496-4d03-a06b-de5cd486e7ed-kube-api-access-8b2s2\") pod \"acd169c8-4496-4d03-a06b-de5cd486e7ed\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.103771 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-s3\") pod \"acd169c8-4496-4d03-a06b-de5cd486e7ed\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.103860 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-http\") pod \"acd169c8-4496-4d03-a06b-de5cd486e7ed\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.103909 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-ca-bundle\") pod \"acd169c8-4496-4d03-a06b-de5cd486e7ed\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.103973 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-config\") pod \"acd169c8-4496-4d03-a06b-de5cd486e7ed\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.103991 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-grpc\") pod \"acd169c8-4496-4d03-a06b-de5cd486e7ed\" (UID: \"acd169c8-4496-4d03-a06b-de5cd486e7ed\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.108189 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "acd169c8-4496-4d03-a06b-de5cd486e7ed" (UID: "acd169c8-4496-4d03-a06b-de5cd486e7ed"). InnerVolumeSpecName "logging-loki-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.109200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-config" (OuterVolumeSpecName: "config") pod "acd169c8-4496-4d03-a06b-de5cd486e7ed" (UID: "acd169c8-4496-4d03-a06b-de5cd486e7ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.109306 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-http" (OuterVolumeSpecName: "logging-loki-querier-http") pod "acd169c8-4496-4d03-a06b-de5cd486e7ed" (UID: "acd169c8-4496-4d03-a06b-de5cd486e7ed"). 
InnerVolumeSpecName "logging-loki-querier-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.111004 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-s3" (OuterVolumeSpecName: "logging-loki-s3") pod "acd169c8-4496-4d03-a06b-de5cd486e7ed" (UID: "acd169c8-4496-4d03-a06b-de5cd486e7ed"). InnerVolumeSpecName "logging-loki-s3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.111641 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd169c8-4496-4d03-a06b-de5cd486e7ed-kube-api-access-8b2s2" (OuterVolumeSpecName: "kube-api-access-8b2s2") pod "acd169c8-4496-4d03-a06b-de5cd486e7ed" (UID: "acd169c8-4496-4d03-a06b-de5cd486e7ed"). InnerVolumeSpecName "kube-api-access-8b2s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.111813 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-grpc" (OuterVolumeSpecName: "logging-loki-querier-grpc") pod "acd169c8-4496-4d03-a06b-de5cd486e7ed" (UID: "acd169c8-4496-4d03-a06b-de5cd486e7ed"). InnerVolumeSpecName "logging-loki-querier-grpc". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.206688 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-s3\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.206720 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.206733 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.206743 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd169c8-4496-4d03-a06b-de5cd486e7ed-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.206750 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/acd169c8-4496-4d03-a06b-de5cd486e7ed-logging-loki-querier-grpc\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.206760 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b2s2\" (UniqueName: \"kubernetes.io/projected/acd169c8-4496-4d03-a06b-de5cd486e7ed-kube-api-access-8b2s2\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.286746 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.308088 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-ca-bundle\") pod \"9a12135c-5a4e-490c-9a7a-2e741084697d\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.308516 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-http\") pod \"9a12135c-5a4e-490c-9a7a-2e741084697d\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.308547 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-config\") pod \"9a12135c-5a4e-490c-9a7a-2e741084697d\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.309236 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-grpc\") pod \"9a12135c-5a4e-490c-9a7a-2e741084697d\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.309293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-ca-bundle" (OuterVolumeSpecName: "logging-loki-ca-bundle") pod "9a12135c-5a4e-490c-9a7a-2e741084697d" (UID: "9a12135c-5a4e-490c-9a7a-2e741084697d"). InnerVolumeSpecName "logging-loki-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.309312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8hg7\" (UniqueName: \"kubernetes.io/projected/9a12135c-5a4e-490c-9a7a-2e741084697d-kube-api-access-h8hg7\") pod \"9a12135c-5a4e-490c-9a7a-2e741084697d\" (UID: \"9a12135c-5a4e-490c-9a7a-2e741084697d\") " Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.310965 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.311412 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-config" (OuterVolumeSpecName: "config") pod "9a12135c-5a4e-490c-9a7a-2e741084697d" (UID: "9a12135c-5a4e-490c-9a7a-2e741084697d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.324636 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-grpc" (OuterVolumeSpecName: "logging-loki-query-frontend-grpc") pod "9a12135c-5a4e-490c-9a7a-2e741084697d" (UID: "9a12135c-5a4e-490c-9a7a-2e741084697d"). InnerVolumeSpecName "logging-loki-query-frontend-grpc". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.324740 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-http" (OuterVolumeSpecName: "logging-loki-query-frontend-http") pod "9a12135c-5a4e-490c-9a7a-2e741084697d" (UID: "9a12135c-5a4e-490c-9a7a-2e741084697d"). InnerVolumeSpecName "logging-loki-query-frontend-http". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.326677 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a12135c-5a4e-490c-9a7a-2e741084697d-kube-api-access-h8hg7" (OuterVolumeSpecName: "kube-api-access-h8hg7") pod "9a12135c-5a4e-490c-9a7a-2e741084697d" (UID: "9a12135c-5a4e-490c-9a7a-2e741084697d"). InnerVolumeSpecName "kube-api-access-h8hg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.333923 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" event={"ID":"acd169c8-4496-4d03-a06b-de5cd486e7ed","Type":"ContainerDied","Data":"d401f37bd3b1a7a17070986d04149ca37487609744328869732032411c1fdf8f"} Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.333967 4687 scope.go:117] "RemoveContainer" containerID="5b0090e754e862e3c3a96c9e0911ab964477142974a0441f6626a34602552301" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.334114 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-blj52" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.340611 4687 generic.go:334] "Generic (PLEG): container finished" podID="9a12135c-5a4e-490c-9a7a-2e741084697d" containerID="7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614" exitCode=137 Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.340658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" event={"ID":"9a12135c-5a4e-490c-9a7a-2e741084697d","Type":"ContainerDied","Data":"7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614"} Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.340684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" event={"ID":"9a12135c-5a4e-490c-9a7a-2e741084697d","Type":"ContainerDied","Data":"eae06805fc709d7593dd87080906b60715c5ebeb5f1c42f5ee1b4ee57d24527a"} Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.340628 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.345960 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" event={"ID":"d5585f46-790b-4a19-b276-9c185d49e5fb","Type":"ContainerDied","Data":"56dfa5b6267df4c585f8096b7773abd7afecafc226b76e822be672b486103263"} Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.345997 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.413209 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-http\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.413234 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a12135c-5a4e-490c-9a7a-2e741084697d-config\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.413246 4687 reconciler_common.go:293] "Volume detached for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9a12135c-5a4e-490c-9a7a-2e741084697d-logging-loki-query-frontend-grpc\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.413258 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8hg7\" (UniqueName: \"kubernetes.io/projected/9a12135c-5a4e-490c-9a7a-2e741084697d-kube-api-access-h8hg7\") on node \"crc\" DevicePath \"\"" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.449275 4687 scope.go:117] "RemoveContainer" containerID="7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.453721 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5"] Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.467684 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7lkk5"] Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.482737 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2"] Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.492982 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-mm5q2"] Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.500574 4687 scope.go:117] "RemoveContainer" containerID="7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614" Mar 12 16:49:53 crc kubenswrapper[4687]: E0312 16:49:53.501064 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614\": container with ID starting with 7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614 not found: ID does not exist" containerID="7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.501094 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614"} err="failed to get container status \"7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614\": rpc error: code = NotFound desc = could not find container \"7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614\": container with ID starting with 7585b60e5c4624d3f39c73acd70945d06f6888f4584c6578ec3a8611c3c87614 not found: ID does not exist" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.501114 4687 scope.go:117] "RemoveContainer" 
containerID="541031a8847e15f19cb40a3dafd43ee51a05693a4687976728feaabd4b0eabd1" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.506223 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-blj52"] Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.516858 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-blj52"] Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.745567 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a12135c-5a4e-490c-9a7a-2e741084697d" path="/var/lib/kubelet/pods/9a12135c-5a4e-490c-9a7a-2e741084697d/volumes" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.746145 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" path="/var/lib/kubelet/pods/acd169c8-4496-4d03-a06b-de5cd486e7ed/volumes" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.746772 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" path="/var/lib/kubelet/pods/d5585f46-790b-4a19-b276-9c185d49e5fb/volumes" Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.778407 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 12 16:49:53 crc kubenswrapper[4687]: I0312 16:49:53.778484 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.161045 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555570-g9trj"] Mar 12 16:50:00 crc kubenswrapper[4687]: E0312 16:50:00.162262 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" containerName="extract-utilities" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162279 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" containerName="extract-utilities" Mar 12 16:50:00 crc kubenswrapper[4687]: E0312 16:50:00.162302 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerName="loki-querier" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162310 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerName="loki-querier" Mar 12 16:50:00 crc kubenswrapper[4687]: E0312 16:50:00.162341 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerName="loki-distributor" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162349 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerName="loki-distributor" Mar 12 16:50:00 crc kubenswrapper[4687]: E0312 16:50:00.162401 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" containerName="extract-content" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162411 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" 
containerName="extract-content" Mar 12 16:50:00 crc kubenswrapper[4687]: E0312 16:50:00.162442 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" containerName="registry-server" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162451 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" containerName="registry-server" Mar 12 16:50:00 crc kubenswrapper[4687]: E0312 16:50:00.162463 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a12135c-5a4e-490c-9a7a-2e741084697d" containerName="loki-query-frontend" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162471 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a12135c-5a4e-490c-9a7a-2e741084697d" containerName="loki-query-frontend" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162754 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd169c8-4496-4d03-a06b-de5cd486e7ed" containerName="loki-querier" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162775 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5585f46-790b-4a19-b276-9c185d49e5fb" containerName="loki-distributor" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162815 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a12135c-5a4e-490c-9a7a-2e741084697d" containerName="loki-query-frontend" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.162827 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="268cba67-b44a-4d37-8567-475c1fd27a20" containerName="registry-server" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.163930 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555570-g9trj" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.169623 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.169873 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.170150 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.182759 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555570-g9trj"] Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.303812 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p78bh\" (UniqueName: \"kubernetes.io/projected/965c0c2e-6a1e-4492-b703-c3f4ecd3588e-kube-api-access-p78bh\") pod \"auto-csr-approver-29555570-g9trj\" (UID: \"965c0c2e-6a1e-4492-b703-c3f4ecd3588e\") " pod="openshift-infra/auto-csr-approver-29555570-g9trj" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.407202 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p78bh\" (UniqueName: \"kubernetes.io/projected/965c0c2e-6a1e-4492-b703-c3f4ecd3588e-kube-api-access-p78bh\") pod \"auto-csr-approver-29555570-g9trj\" (UID: \"965c0c2e-6a1e-4492-b703-c3f4ecd3588e\") " pod="openshift-infra/auto-csr-approver-29555570-g9trj" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.437308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p78bh\" (UniqueName: \"kubernetes.io/projected/965c0c2e-6a1e-4492-b703-c3f4ecd3588e-kube-api-access-p78bh\") pod \"auto-csr-approver-29555570-g9trj\" (UID: \"965c0c2e-6a1e-4492-b703-c3f4ecd3588e\") " pod="openshift-infra/auto-csr-approver-29555570-g9trj" Mar 12 16:50:00 crc kubenswrapper[4687]: I0312 16:50:00.490905 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555570-g9trj" Mar 12 16:50:01 crc kubenswrapper[4687]: I0312 16:50:01.038237 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555570-g9trj"] Mar 12 16:50:01 crc kubenswrapper[4687]: I0312 16:50:01.459433 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555570-g9trj" event={"ID":"965c0c2e-6a1e-4492-b703-c3f4ecd3588e","Type":"ContainerStarted","Data":"f3a11e8dd2558f743585bc7feacfe1552811678d5ad8655b8c4e7104c4ef753d"} Mar 12 16:50:03 crc kubenswrapper[4687]: I0312 16:50:03.604581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555570-g9trj" event={"ID":"965c0c2e-6a1e-4492-b703-c3f4ecd3588e","Type":"ContainerStarted","Data":"8a837248011ccdcf770cd26fa8f137de0a3eecf40daac2d712bf162e8347c3e7"} Mar 12 16:50:03 crc kubenswrapper[4687]: I0312 16:50:03.630177 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555570-g9trj" podStartSLOduration=2.183875615 podStartE2EDuration="3.630155245s" podCreationTimestamp="2026-03-12 16:50:00 +0000 UTC" firstStartedPulling="2026-03-12 16:50:01.04248318 +0000 UTC m=+2850.006445524" lastFinishedPulling="2026-03-12 16:50:02.48876281 +0000 UTC m=+2851.452725154" observedRunningTime="2026-03-12 16:50:03.622896402 +0000 UTC m=+2852.586858746" watchObservedRunningTime="2026-03-12 16:50:03.630155245 +0000 UTC m=+2852.594117609" Mar 12 16:50:03 crc kubenswrapper[4687]: I0312 16:50:03.778174 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 12 16:50:03 crc kubenswrapper[4687]: I0312 16:50:03.778231 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:50:04 crc kubenswrapper[4687]: I0312 16:50:04.626873 4687 generic.go:334] "Generic (PLEG): container finished" podID="965c0c2e-6a1e-4492-b703-c3f4ecd3588e" containerID="8a837248011ccdcf770cd26fa8f137de0a3eecf40daac2d712bf162e8347c3e7" exitCode=0 Mar 12 16:50:04 crc kubenswrapper[4687]: I0312 16:50:04.627020 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555570-g9trj" event={"ID":"965c0c2e-6a1e-4492-b703-c3f4ecd3588e","Type":"ContainerDied","Data":"8a837248011ccdcf770cd26fa8f137de0a3eecf40daac2d712bf162e8347c3e7"} Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.033857 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555570-g9trj" Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.057896 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p78bh\" (UniqueName: \"kubernetes.io/projected/965c0c2e-6a1e-4492-b703-c3f4ecd3588e-kube-api-access-p78bh\") pod \"965c0c2e-6a1e-4492-b703-c3f4ecd3588e\" (UID: \"965c0c2e-6a1e-4492-b703-c3f4ecd3588e\") " Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.063922 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/965c0c2e-6a1e-4492-b703-c3f4ecd3588e-kube-api-access-p78bh" (OuterVolumeSpecName: "kube-api-access-p78bh") pod "965c0c2e-6a1e-4492-b703-c3f4ecd3588e" (UID: "965c0c2e-6a1e-4492-b703-c3f4ecd3588e"). InnerVolumeSpecName "kube-api-access-p78bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.161293 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p78bh\" (UniqueName: \"kubernetes.io/projected/965c0c2e-6a1e-4492-b703-c3f4ecd3588e-kube-api-access-p78bh\") on node \"crc\" DevicePath \"\"" Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.658990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555570-g9trj" event={"ID":"965c0c2e-6a1e-4492-b703-c3f4ecd3588e","Type":"ContainerDied","Data":"f3a11e8dd2558f743585bc7feacfe1552811678d5ad8655b8c4e7104c4ef753d"} Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.659032 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a11e8dd2558f743585bc7feacfe1552811678d5ad8655b8c4e7104c4ef753d" Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.659130 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555570-g9trj" Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.733192 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555564-sbwcx"] Mar 12 16:50:06 crc kubenswrapper[4687]: I0312 16:50:06.745153 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555564-sbwcx"] Mar 12 16:50:07 crc kubenswrapper[4687]: I0312 16:50:07.748224 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14dae95f-17a3-4f7f-96f4-e07c1713c61c" path="/var/lib/kubelet/pods/14dae95f-17a3-4f7f-96f4-e07c1713c61c/volumes" Mar 12 16:50:13 crc kubenswrapper[4687]: I0312 16:50:13.773232 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 12 16:50:13 crc kubenswrapper[4687]: I0312 16:50:13.773798 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 16:50:23 crc kubenswrapper[4687]: I0312 16:50:23.773071 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 12 16:50:34 crc kubenswrapper[4687]: I0312 16:50:34.458022 4687 scope.go:117] "RemoveContainer" containerID="c6bff18185ea0de94ca3908b722fe363155c60784879d6a0cc5d3def39c1a13c" Mar 12 16:50:44 crc kubenswrapper[4687]: I0312 16:50:44.121334 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:50:44 crc kubenswrapper[4687]: I0312 16:50:44.122661 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:51:03 crc kubenswrapper[4687]: I0312 16:51:03.359090 4687 generic.go:334] "Generic (PLEG): container finished" podID="72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" containerID="77095e59f42f9c018ebc8c51e90a79364986df3ff273a4fe9462cb6bc51d5e48" exitCode=0 Mar 12 16:51:03 crc kubenswrapper[4687]: I0312 16:51:03.359187 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" event={"ID":"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec","Type":"ContainerDied","Data":"77095e59f42f9c018ebc8c51e90a79364986df3ff273a4fe9462cb6bc51d5e48"} Mar 12 16:51:04 crc kubenswrapper[4687]: I0312 16:51:04.987781 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.065836 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-telemetry-power-monitoring-combined-ca-bundle\") pod \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.066161 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqchd\" (UniqueName: \"kubernetes.io/projected/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-kube-api-access-vqchd\") pod \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.067267 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-0\") pod \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.067564 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-1\") pod \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.067683 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ssh-key-openstack-edpm-ipam\") pod \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.067848 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-inventory\") pod \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.068243 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-2\") pod \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\" (UID: \"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec\") " Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.072242 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-kube-api-access-vqchd" (OuterVolumeSpecName: "kube-api-access-vqchd") pod "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" (UID: "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec"). InnerVolumeSpecName "kube-api-access-vqchd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.073687 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" (UID: "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.100657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" (UID: "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.103211 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" (UID: "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.104170 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" (UID: "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.107516 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-inventory" (OuterVolumeSpecName: "inventory") pod "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" (UID: "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.113841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" (UID: "72ce7c15-b60f-47fc-93d8-68d5dda4b9ec"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.172102 4687 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.172142 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqchd\" (UniqueName: \"kubernetes.io/projected/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-kube-api-access-vqchd\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.172154 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.172166 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.172176 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.172186 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.172193 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/72ce7c15-b60f-47fc-93d8-68d5dda4b9ec-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.389883 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" event={"ID":"72ce7c15-b60f-47fc-93d8-68d5dda4b9ec","Type":"ContainerDied","Data":"b998997eb8fba31abd3be13ecee48768aee9c20a2405566251819c8bbbab17df"} Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.389942 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b998997eb8fba31abd3be13ecee48768aee9c20a2405566251819c8bbbab17df" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.390087 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.490666 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt"] Mar 12 16:51:05 crc kubenswrapper[4687]: E0312 16:51:05.491943 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965c0c2e-6a1e-4492-b703-c3f4ecd3588e" containerName="oc" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.491991 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="965c0c2e-6a1e-4492-b703-c3f4ecd3588e" containerName="oc" Mar 12 16:51:05 crc kubenswrapper[4687]: E0312 16:51:05.492038 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.492060 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.492707 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="965c0c2e-6a1e-4492-b703-c3f4ecd3588e" containerName="oc" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.492759 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ce7c15-b60f-47fc-93d8-68d5dda4b9ec" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.494887 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.500767 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.500807 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.501025 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.501115 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.501140 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6l69v" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.503061 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt"] Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.581076 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.581147 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.581253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnqv\" (UniqueName: \"kubernetes.io/projected/268b6164-14c8-4663-80be-5b81ea271407-kube-api-access-tvnqv\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.581346 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.581531 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.683774 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvnqv\" (UniqueName: \"kubernetes.io/projected/268b6164-14c8-4663-80be-5b81ea271407-kube-api-access-tvnqv\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.684145 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.684280 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.684952 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.684986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.689171 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.689849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.690578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.690659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.699659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvnqv\" (UniqueName: \"kubernetes.io/projected/268b6164-14c8-4663-80be-5b81ea271407-kube-api-access-tvnqv\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hh4pt\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:05 crc kubenswrapper[4687]: I0312 16:51:05.830433 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:06 crc kubenswrapper[4687]: I0312 16:51:06.408873 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:51:06 crc kubenswrapper[4687]: I0312 16:51:06.409656 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt"] Mar 12 16:51:07 crc kubenswrapper[4687]: I0312 16:51:07.419562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" event={"ID":"268b6164-14c8-4663-80be-5b81ea271407","Type":"ContainerStarted","Data":"60c451bfaefa3d5bdb4e8d74c571310070107b3b859125c92183a4f0f7b3c4f6"} Mar 12 16:51:07 crc kubenswrapper[4687]: I0312 16:51:07.420119 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" event={"ID":"268b6164-14c8-4663-80be-5b81ea271407","Type":"ContainerStarted","Data":"3d9d05e8031b2906d549eec1f98717e5b66780858220b1a10d17bdb1d7f4134d"} Mar 12 16:51:07 crc kubenswrapper[4687]: I0312 16:51:07.447780 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" podStartSLOduration=1.897501132 podStartE2EDuration="2.447750885s" podCreationTimestamp="2026-03-12 16:51:05 +0000 UTC" firstStartedPulling="2026-03-12 16:51:06.408628495 +0000 UTC m=+2915.372590839" lastFinishedPulling="2026-03-12 16:51:06.958878238 +0000 UTC m=+2915.922840592" observedRunningTime="2026-03-12 16:51:07.435760556 +0000 UTC m=+2916.399722930" watchObservedRunningTime="2026-03-12 16:51:07.447750885 +0000 UTC m=+2916.411713229" Mar 12 16:51:14 crc kubenswrapper[4687]: I0312 16:51:14.121811 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:51:14 crc kubenswrapper[4687]: I0312 16:51:14.122464 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:51:21 crc kubenswrapper[4687]: I0312 16:51:21.599065 4687 generic.go:334] "Generic (PLEG): container finished" podID="268b6164-14c8-4663-80be-5b81ea271407" containerID="60c451bfaefa3d5bdb4e8d74c571310070107b3b859125c92183a4f0f7b3c4f6" exitCode=0 Mar 12 16:51:21 crc kubenswrapper[4687]: I0312 16:51:21.599129 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" event={"ID":"268b6164-14c8-4663-80be-5b81ea271407","Type":"ContainerDied","Data":"60c451bfaefa3d5bdb4e8d74c571310070107b3b859125c92183a4f0f7b3c4f6"} Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.062156 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.159523 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-inventory\") pod \"268b6164-14c8-4663-80be-5b81ea271407\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.159698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-ssh-key-openstack-edpm-ipam\") pod \"268b6164-14c8-4663-80be-5b81ea271407\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.159800 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-1\") pod \"268b6164-14c8-4663-80be-5b81ea271407\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.159861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvnqv\" (UniqueName: \"kubernetes.io/projected/268b6164-14c8-4663-80be-5b81ea271407-kube-api-access-tvnqv\") pod \"268b6164-14c8-4663-80be-5b81ea271407\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.159882 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-0\") pod \"268b6164-14c8-4663-80be-5b81ea271407\" (UID: \"268b6164-14c8-4663-80be-5b81ea271407\") " Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.166610 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268b6164-14c8-4663-80be-5b81ea271407-kube-api-access-tvnqv" (OuterVolumeSpecName: "kube-api-access-tvnqv") pod "268b6164-14c8-4663-80be-5b81ea271407" (UID: "268b6164-14c8-4663-80be-5b81ea271407"). InnerVolumeSpecName "kube-api-access-tvnqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.193695 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "268b6164-14c8-4663-80be-5b81ea271407" (UID: "268b6164-14c8-4663-80be-5b81ea271407"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.194292 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-inventory" (OuterVolumeSpecName: "inventory") pod "268b6164-14c8-4663-80be-5b81ea271407" (UID: "268b6164-14c8-4663-80be-5b81ea271407"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.194558 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "268b6164-14c8-4663-80be-5b81ea271407" (UID: "268b6164-14c8-4663-80be-5b81ea271407"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.196633 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "268b6164-14c8-4663-80be-5b81ea271407" (UID: "268b6164-14c8-4663-80be-5b81ea271407"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.263195 4687 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.263237 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvnqv\" (UniqueName: \"kubernetes.io/projected/268b6164-14c8-4663-80be-5b81ea271407-kube-api-access-tvnqv\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.263248 4687 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.263278 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.263292 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/268b6164-14c8-4663-80be-5b81ea271407-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.620319 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" event={"ID":"268b6164-14c8-4663-80be-5b81ea271407","Type":"ContainerDied","Data":"3d9d05e8031b2906d549eec1f98717e5b66780858220b1a10d17bdb1d7f4134d"} Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.620638 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d9d05e8031b2906d549eec1f98717e5b66780858220b1a10d17bdb1d7f4134d" Mar 12 16:51:23 crc kubenswrapper[4687]: I0312 16:51:23.620475 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hh4pt" Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.121572 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.121978 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.122018 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.122823 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.122867 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" gracePeriod=600 Mar 12 16:51:44 crc kubenswrapper[4687]: E0312 16:51:44.257149 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.911595 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" exitCode=0 Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.912106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65"} Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.912237 4687 scope.go:117] "RemoveContainer" containerID="46abef41881c9a6f28882d6a10c8f0678dc2602721fd1e31fad6112aecd70e2a" Mar 12 16:51:44 crc kubenswrapper[4687]: I0312 16:51:44.913131 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:51:44 crc kubenswrapper[4687]: E0312 16:51:44.913504 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.148625 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555572-n72kt"] Mar 12 16:52:00 crc kubenswrapper[4687]: E0312 16:52:00.150216 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268b6164-14c8-4663-80be-5b81ea271407" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.150236 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="268b6164-14c8-4663-80be-5b81ea271407" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.150573 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="268b6164-14c8-4663-80be-5b81ea271407" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.151639 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555572-n72kt" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.155232 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.155342 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.155611 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.165912 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555572-n72kt"] Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.313692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ps7b\" (UniqueName: \"kubernetes.io/projected/2bf66a23-2621-47b0-b006-0d9b0f21b0a5-kube-api-access-4ps7b\") pod \"auto-csr-approver-29555572-n72kt\" (UID: \"2bf66a23-2621-47b0-b006-0d9b0f21b0a5\") " pod="openshift-infra/auto-csr-approver-29555572-n72kt" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.416402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ps7b\" (UniqueName: \"kubernetes.io/projected/2bf66a23-2621-47b0-b006-0d9b0f21b0a5-kube-api-access-4ps7b\") pod \"auto-csr-approver-29555572-n72kt\" (UID: \"2bf66a23-2621-47b0-b006-0d9b0f21b0a5\") " pod="openshift-infra/auto-csr-approver-29555572-n72kt" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.442148 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ps7b\" (UniqueName: \"kubernetes.io/projected/2bf66a23-2621-47b0-b006-0d9b0f21b0a5-kube-api-access-4ps7b\") pod \"auto-csr-approver-29555572-n72kt\" (UID: \"2bf66a23-2621-47b0-b006-0d9b0f21b0a5\") " pod="openshift-infra/auto-csr-approver-29555572-n72kt" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.470050 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555572-n72kt" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.732432 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:52:00 crc kubenswrapper[4687]: E0312 16:52:00.733090 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:52:00 crc kubenswrapper[4687]: I0312 16:52:00.959436 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555572-n72kt"] Mar 12 16:52:01 crc kubenswrapper[4687]: I0312 16:52:01.086265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555572-n72kt" event={"ID":"2bf66a23-2621-47b0-b006-0d9b0f21b0a5","Type":"ContainerStarted","Data":"59dd71194c27a5e31f654ecfc4dbcdf14d4448edac504f9fdb8f057bb4f564da"} Mar 12 16:52:03 crc kubenswrapper[4687]: I0312 16:52:03.113147 4687 generic.go:334] "Generic (PLEG): container finished" podID="2bf66a23-2621-47b0-b006-0d9b0f21b0a5" containerID="1a374b06560de530d622e48c9db58ef1a8feedf49e3f0b0236d4d4fd391f5a30" exitCode=0 Mar 12 16:52:03 crc kubenswrapper[4687]: I0312 16:52:03.113510 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555572-n72kt" event={"ID":"2bf66a23-2621-47b0-b006-0d9b0f21b0a5","Type":"ContainerDied","Data":"1a374b06560de530d622e48c9db58ef1a8feedf49e3f0b0236d4d4fd391f5a30"} Mar 12 16:52:04 crc kubenswrapper[4687]: I0312 16:52:04.569477 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555572-n72kt" Mar 12 16:52:04 crc kubenswrapper[4687]: I0312 16:52:04.728428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ps7b\" (UniqueName: \"kubernetes.io/projected/2bf66a23-2621-47b0-b006-0d9b0f21b0a5-kube-api-access-4ps7b\") pod \"2bf66a23-2621-47b0-b006-0d9b0f21b0a5\" (UID: \"2bf66a23-2621-47b0-b006-0d9b0f21b0a5\") " Mar 12 16:52:04 crc kubenswrapper[4687]: I0312 16:52:04.737248 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bf66a23-2621-47b0-b006-0d9b0f21b0a5-kube-api-access-4ps7b" (OuterVolumeSpecName: "kube-api-access-4ps7b") pod "2bf66a23-2621-47b0-b006-0d9b0f21b0a5" (UID: "2bf66a23-2621-47b0-b006-0d9b0f21b0a5"). InnerVolumeSpecName "kube-api-access-4ps7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:52:04 crc kubenswrapper[4687]: I0312 16:52:04.832313 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ps7b\" (UniqueName: \"kubernetes.io/projected/2bf66a23-2621-47b0-b006-0d9b0f21b0a5-kube-api-access-4ps7b\") on node \"crc\" DevicePath \"\"" Mar 12 16:52:05 crc kubenswrapper[4687]: I0312 16:52:05.138899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555572-n72kt" event={"ID":"2bf66a23-2621-47b0-b006-0d9b0f21b0a5","Type":"ContainerDied","Data":"59dd71194c27a5e31f654ecfc4dbcdf14d4448edac504f9fdb8f057bb4f564da"} Mar 12 16:52:05 crc kubenswrapper[4687]: I0312 16:52:05.138955 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59dd71194c27a5e31f654ecfc4dbcdf14d4448edac504f9fdb8f057bb4f564da" Mar 12 16:52:05 crc kubenswrapper[4687]: I0312 16:52:05.138981 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555572-n72kt" Mar 12 16:52:05 crc kubenswrapper[4687]: I0312 16:52:05.657557 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555566-s4gkh"] Mar 12 16:52:05 crc kubenswrapper[4687]: I0312 16:52:05.674312 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555566-s4gkh"] Mar 12 16:52:05 crc kubenswrapper[4687]: I0312 16:52:05.760218 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b163ef-54ed-47b4-8284-93ffc647b4ef" path="/var/lib/kubelet/pods/c8b163ef-54ed-47b4-8284-93ffc647b4ef/volumes" Mar 12 16:52:14 crc kubenswrapper[4687]: I0312 16:52:14.734398 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:52:14 crc kubenswrapper[4687]: E0312 16:52:14.735817 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:52:27 crc kubenswrapper[4687]: I0312 16:52:27.733148 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:52:27 crc kubenswrapper[4687]: E0312 16:52:27.733810 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:52:34 crc kubenswrapper[4687]: I0312 16:52:34.642790 4687 scope.go:117] "RemoveContainer" containerID="ac3a666a1c8ef15974d032fd72be1493e926810603be3ec69e85b953e004b986" Mar 12 16:52:38 crc kubenswrapper[4687]: I0312 16:52:38.733669 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:52:38 crc kubenswrapper[4687]: E0312 16:52:38.734773 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:52:53 crc kubenswrapper[4687]: I0312 16:52:53.735504 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:52:53 crc kubenswrapper[4687]: E0312 16:52:53.739431 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:53:07 crc kubenswrapper[4687]: I0312 16:53:07.733382 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:53:07 crc kubenswrapper[4687]: E0312 16:53:07.734240 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:53:21 crc kubenswrapper[4687]: I0312 16:53:21.742010 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:53:21 crc kubenswrapper[4687]: E0312 16:53:21.742736 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:53:33 crc kubenswrapper[4687]: I0312 16:53:33.734186 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:53:33 crc kubenswrapper[4687]: E0312 16:53:33.735235 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:53:46 crc kubenswrapper[4687]: I0312 16:53:46.733888 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:53:46 crc kubenswrapper[4687]: E0312 16:53:46.734616 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:53:57 crc kubenswrapper[4687]: I0312 16:53:57.733847 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:53:57 crc kubenswrapper[4687]: E0312 16:53:57.735354 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.151522 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555574-p4qbn"] Mar 12 16:54:00 crc kubenswrapper[4687]: E0312 16:54:00.152603 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf66a23-2621-47b0-b006-0d9b0f21b0a5" containerName="oc" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.152617 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf66a23-2621-47b0-b006-0d9b0f21b0a5" containerName="oc" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.152878 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bf66a23-2621-47b0-b006-0d9b0f21b0a5" containerName="oc" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.153696 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.156863 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.157138 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.158444 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.221656 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555574-p4qbn"] Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.238863 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8fh\" (UniqueName: \"kubernetes.io/projected/e1c16aa9-7193-437a-9ce1-bffc3e52887f-kube-api-access-cn8fh\") pod \"auto-csr-approver-29555574-p4qbn\" (UID: \"e1c16aa9-7193-437a-9ce1-bffc3e52887f\") " pod="openshift-infra/auto-csr-approver-29555574-p4qbn" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.341642 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8fh\" (UniqueName: \"kubernetes.io/projected/e1c16aa9-7193-437a-9ce1-bffc3e52887f-kube-api-access-cn8fh\") pod \"auto-csr-approver-29555574-p4qbn\" (UID: \"e1c16aa9-7193-437a-9ce1-bffc3e52887f\") " pod="openshift-infra/auto-csr-approver-29555574-p4qbn" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.366658 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn8fh\" (UniqueName: 
\"kubernetes.io/projected/e1c16aa9-7193-437a-9ce1-bffc3e52887f-kube-api-access-cn8fh\") pod \"auto-csr-approver-29555574-p4qbn\" (UID: \"e1c16aa9-7193-437a-9ce1-bffc3e52887f\") " pod="openshift-infra/auto-csr-approver-29555574-p4qbn" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.473386 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" Mar 12 16:54:00 crc kubenswrapper[4687]: I0312 16:54:00.974041 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555574-p4qbn"] Mar 12 16:54:01 crc kubenswrapper[4687]: I0312 16:54:01.530401 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" event={"ID":"e1c16aa9-7193-437a-9ce1-bffc3e52887f","Type":"ContainerStarted","Data":"a2fed038a49c05e3672553265ef1f50ffbac978d3d4b0e245377797e0054df74"} Mar 12 16:54:02 crc kubenswrapper[4687]: I0312 16:54:02.545978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" event={"ID":"e1c16aa9-7193-437a-9ce1-bffc3e52887f","Type":"ContainerStarted","Data":"347eacd1677f6a7eb33ab5c5d07f82b9cbeed62bc73c90d8e79dde904122e552"} Mar 12 16:54:02 crc kubenswrapper[4687]: I0312 16:54:02.578724 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" podStartSLOduration=1.449702178 podStartE2EDuration="2.578702943s" podCreationTimestamp="2026-03-12 16:54:00 +0000 UTC" firstStartedPulling="2026-03-12 16:54:00.980170826 +0000 UTC m=+3089.944133170" lastFinishedPulling="2026-03-12 16:54:02.109171551 +0000 UTC m=+3091.073133935" observedRunningTime="2026-03-12 16:54:02.561579317 +0000 UTC m=+3091.525541661" watchObservedRunningTime="2026-03-12 16:54:02.578702943 +0000 UTC m=+3091.542665287" Mar 12 16:54:03 crc kubenswrapper[4687]: I0312 16:54:03.556653 4687 generic.go:334] "Generic (PLEG): container finished" podID="e1c16aa9-7193-437a-9ce1-bffc3e52887f" containerID="347eacd1677f6a7eb33ab5c5d07f82b9cbeed62bc73c90d8e79dde904122e552" exitCode=0 Mar 12 16:54:03 crc kubenswrapper[4687]: I0312 16:54:03.556720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" event={"ID":"e1c16aa9-7193-437a-9ce1-bffc3e52887f","Type":"ContainerDied","Data":"347eacd1677f6a7eb33ab5c5d07f82b9cbeed62bc73c90d8e79dde904122e552"} Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.024675 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.088959 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn8fh\" (UniqueName: \"kubernetes.io/projected/e1c16aa9-7193-437a-9ce1-bffc3e52887f-kube-api-access-cn8fh\") pod \"e1c16aa9-7193-437a-9ce1-bffc3e52887f\" (UID: \"e1c16aa9-7193-437a-9ce1-bffc3e52887f\") " Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.095866 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c16aa9-7193-437a-9ce1-bffc3e52887f-kube-api-access-cn8fh" (OuterVolumeSpecName: "kube-api-access-cn8fh") pod "e1c16aa9-7193-437a-9ce1-bffc3e52887f" (UID: "e1c16aa9-7193-437a-9ce1-bffc3e52887f"). InnerVolumeSpecName "kube-api-access-cn8fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.135630 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xnh8x"] Mar 12 16:54:05 crc kubenswrapper[4687]: E0312 16:54:05.136353 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c16aa9-7193-437a-9ce1-bffc3e52887f" containerName="oc" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.136395 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c16aa9-7193-437a-9ce1-bffc3e52887f" containerName="oc" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.136696 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c16aa9-7193-437a-9ce1-bffc3e52887f" containerName="oc" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.138898 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.154529 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnh8x"] Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.191506 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-utilities\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.191615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whntm\" (UniqueName: \"kubernetes.io/projected/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-kube-api-access-whntm\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.192227 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-catalog-content\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.192491 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn8fh\" (UniqueName: \"kubernetes.io/projected/e1c16aa9-7193-437a-9ce1-bffc3e52887f-kube-api-access-cn8fh\") on node \"crc\" DevicePath \"\"" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.295751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-catalog-content\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.295830 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-utilities\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.295861 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-whntm\" (UniqueName: \"kubernetes.io/projected/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-kube-api-access-whntm\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.296577 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-catalog-content\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.296659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-utilities\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.312872 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whntm\" (UniqueName: \"kubernetes.io/projected/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-kube-api-access-whntm\") pod \"redhat-marketplace-xnh8x\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.480496 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.580455 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" event={"ID":"e1c16aa9-7193-437a-9ce1-bffc3e52887f","Type":"ContainerDied","Data":"a2fed038a49c05e3672553265ef1f50ffbac978d3d4b0e245377797e0054df74"} Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.580796 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2fed038a49c05e3672553265ef1f50ffbac978d3d4b0e245377797e0054df74" Mar 12 16:54:05 crc kubenswrapper[4687]: I0312 16:54:05.580866 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555574-p4qbn" Mar 12 16:54:05 crc kubenswrapper[4687]: W0312 16:54:05.999507 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf28e13b_eddb_4bc9_a1a4_4ab43d4330d2.slice/crio-ab9cd9aefd0173f296967bac3c681560dc47d8665f91a643b1a5464816e872d8 WatchSource:0}: Error finding container ab9cd9aefd0173f296967bac3c681560dc47d8665f91a643b1a5464816e872d8: Status 404 returned error can't find the container with id ab9cd9aefd0173f296967bac3c681560dc47d8665f91a643b1a5464816e872d8 Mar 12 16:54:06 crc kubenswrapper[4687]: I0312 16:54:06.001566 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnh8x"] Mar 12 16:54:06 crc kubenswrapper[4687]: I0312 16:54:06.099111 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555568-qcbw9"] Mar 12 16:54:06 crc kubenswrapper[4687]: I0312 16:54:06.109494 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555568-qcbw9"] Mar 12 16:54:06 crc kubenswrapper[4687]: I0312 16:54:06.591919 4687 generic.go:334] "Generic (PLEG): container finished" podID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerID="be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d" exitCode=0 Mar 12 16:54:06 crc kubenswrapper[4687]: I0312 16:54:06.591965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnh8x" event={"ID":"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2","Type":"ContainerDied","Data":"be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d"} Mar 12 16:54:06 crc kubenswrapper[4687]: I0312 16:54:06.592173 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnh8x" event={"ID":"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2","Type":"ContainerStarted","Data":"ab9cd9aefd0173f296967bac3c681560dc47d8665f91a643b1a5464816e872d8"} Mar 12 16:54:07 crc kubenswrapper[4687]: I0312 16:54:07.766648 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddcfa53-3e74-447c-b250-0211d7bb9e2d" path="/var/lib/kubelet/pods/5ddcfa53-3e74-447c-b250-0211d7bb9e2d/volumes" Mar 12 16:54:08 crc kubenswrapper[4687]: I0312 16:54:08.616096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnh8x" event={"ID":"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2","Type":"ContainerStarted","Data":"8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c"} Mar 12 16:54:08 crc kubenswrapper[4687]: I0312 16:54:08.732740 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:54:08 crc kubenswrapper[4687]: E0312 16:54:08.732996 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:54:09 crc kubenswrapper[4687]: I0312 16:54:09.627064 4687 generic.go:334] "Generic (PLEG): container finished" podID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerID="8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c" exitCode=0 Mar 12 16:54:09 
crc kubenswrapper[4687]: I0312 16:54:09.627204 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnh8x" event={"ID":"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2","Type":"ContainerDied","Data":"8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c"} Mar 12 16:54:10 crc kubenswrapper[4687]: I0312 16:54:10.650063 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnh8x" event={"ID":"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2","Type":"ContainerStarted","Data":"416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd"} Mar 12 16:54:10 crc kubenswrapper[4687]: I0312 16:54:10.672245 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xnh8x" podStartSLOduration=2.222496728 podStartE2EDuration="5.672228625s" podCreationTimestamp="2026-03-12 16:54:05 +0000 UTC" firstStartedPulling="2026-03-12 16:54:06.594557246 +0000 UTC m=+3095.558519590" lastFinishedPulling="2026-03-12 16:54:10.044289143 +0000 UTC m=+3099.008251487" observedRunningTime="2026-03-12 16:54:10.669327988 +0000 UTC m=+3099.633290332" watchObservedRunningTime="2026-03-12 16:54:10.672228625 +0000 UTC m=+3099.636190969" Mar 12 16:54:15 crc kubenswrapper[4687]: I0312 16:54:15.480815 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:15 crc kubenswrapper[4687]: I0312 16:54:15.481469 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:15 crc kubenswrapper[4687]: I0312 16:54:15.554299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:15 crc kubenswrapper[4687]: I0312 16:54:15.766489 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:15 crc kubenswrapper[4687]: I0312 16:54:15.841076 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnh8x"] Mar 12 16:54:17 crc kubenswrapper[4687]: I0312 16:54:17.740951 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xnh8x" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="registry-server" containerID="cri-o://416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd" gracePeriod=2 Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.315478 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.357983 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-utilities\") pod \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.358207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whntm\" (UniqueName: \"kubernetes.io/projected/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-kube-api-access-whntm\") pod \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.358260 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-catalog-content\") pod \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\" (UID: \"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2\") " Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.359267 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-utilities" (OuterVolumeSpecName: "utilities") pod "bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" (UID: "bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.370424 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-kube-api-access-whntm" (OuterVolumeSpecName: "kube-api-access-whntm") pod "bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" (UID: "bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2"). InnerVolumeSpecName "kube-api-access-whntm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.390577 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" (UID: "bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.461445 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.461619 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whntm\" (UniqueName: \"kubernetes.io/projected/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-kube-api-access-whntm\") on node \"crc\" DevicePath \"\"" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.461676 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.755805 4687 generic.go:334] "Generic (PLEG): container finished" podID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerID="416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd" exitCode=0 Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.755841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnh8x" event={"ID":"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2","Type":"ContainerDied","Data":"416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd"} Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.755865 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnh8x" event={"ID":"bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2","Type":"ContainerDied","Data":"ab9cd9aefd0173f296967bac3c681560dc47d8665f91a643b1a5464816e872d8"} Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.755881 4687 scope.go:117] "RemoveContainer" containerID="416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.756012 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnh8x" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.824788 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnh8x"] Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.823731 4687 scope.go:117] "RemoveContainer" containerID="8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.837502 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnh8x"] Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.849712 4687 scope.go:117] "RemoveContainer" containerID="be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.916888 4687 scope.go:117] "RemoveContainer" containerID="416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd" Mar 12 16:54:18 crc kubenswrapper[4687]: E0312 16:54:18.917774 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd\": container with ID starting with 416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd not found: ID does not exist" containerID="416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.917835 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd"} err="failed to get container status \"416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd\": rpc error: code = NotFound desc = could not find container \"416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd\": container with ID starting with 416c3b8ab8a298e7c4124ba3b5da4febc7c3f26dc400b42ff21550a2b3e72ecd not found: ID does not exist" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.917864 4687 scope.go:117] "RemoveContainer" containerID="8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c" Mar 12 16:54:18 crc kubenswrapper[4687]: E0312 16:54:18.918246 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c\": container with ID starting with 8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c not found: ID does not exist" containerID="8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.918297 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c"} err="failed to get container status \"8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c\": rpc error: code = NotFound desc = could not find container \"8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c\": container with ID starting with 8bad3a0d9b4c0e3af440485d28b97d1209d678e8dc09375a4c858ee4eb56693c not found: ID does not exist" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.918329 4687 scope.go:117] "RemoveContainer" containerID="be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d" Mar 12 16:54:18 crc kubenswrapper[4687]: E0312 16:54:18.918661 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d\": container with ID starting with be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d not found: ID does not exist" containerID="be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d" Mar 12 16:54:18 crc kubenswrapper[4687]: I0312 16:54:18.918691 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d"} err="failed to get container status \"be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d\": rpc error: code = NotFound desc = could not find container \"be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d\": container with ID starting with be26abac5250ec2ae0cefef181a771ff54571151e767844e544e646e6e601c9d not found: ID does not exist" Mar 12 16:54:19 crc kubenswrapper[4687]: I0312 16:54:19.748546 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" path="/var/lib/kubelet/pods/bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2/volumes" Mar 12 16:54:21 crc kubenswrapper[4687]: I0312 16:54:21.740780 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:54:21 crc kubenswrapper[4687]: E0312 16:54:21.741199 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:54:34 crc kubenswrapper[4687]: I0312 16:54:34.766192 4687 scope.go:117] "RemoveContainer" containerID="2188dd9ff0425ba50c50600c3a1cbbf3efe154de0d3741e4f9c1f04067bff7b5" Mar 12 16:54:35 crc kubenswrapper[4687]: I0312 16:54:35.733175 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:54:35 crc kubenswrapper[4687]: E0312 16:54:35.734195 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:54:47 crc kubenswrapper[4687]: I0312 16:54:47.733702 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:54:47 crc kubenswrapper[4687]: E0312 16:54:47.734557 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:54:58 crc kubenswrapper[4687]: I0312 16:54:58.735035 4687 scope.go:117] "RemoveContainer" 
containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:54:58 crc kubenswrapper[4687]: E0312 16:54:58.735707 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:55:13 crc kubenswrapper[4687]: I0312 16:55:13.734309 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:55:13 crc kubenswrapper[4687]: E0312 16:55:13.735340 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:55:27 crc kubenswrapper[4687]: I0312 16:55:27.733145 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:55:27 crc kubenswrapper[4687]: E0312 16:55:27.733805 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:55:38 crc kubenswrapper[4687]: I0312 16:55:38.732914 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:55:38 crc kubenswrapper[4687]: E0312 16:55:38.733723 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:55:53 crc kubenswrapper[4687]: I0312 16:55:53.732808 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:55:53 crc kubenswrapper[4687]: E0312 16:55:53.733557 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.201267 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555576-hftml"] Mar 12 16:56:00 crc kubenswrapper[4687]: E0312 16:56:00.202398 4687 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="extract-content" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.202413 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="extract-content" Mar 12 16:56:00 crc kubenswrapper[4687]: E0312 16:56:00.202434 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="registry-server" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.202442 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="registry-server" Mar 12 16:56:00 crc kubenswrapper[4687]: E0312 16:56:00.202484 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="extract-utilities" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.202493 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="extract-utilities" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.202760 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf28e13b-eddb-4bc9-a1a4-4ab43d4330d2" containerName="registry-server" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.203744 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555576-hftml" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.206452 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.206823 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.206980 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.214255 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555576-hftml"] Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.237520 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrks\" (UniqueName: \"kubernetes.io/projected/e12d70c4-48a4-4c6d-8579-613fb070ec89-kube-api-access-gdrks\") pod \"auto-csr-approver-29555576-hftml\" (UID: \"e12d70c4-48a4-4c6d-8579-613fb070ec89\") " pod="openshift-infra/auto-csr-approver-29555576-hftml" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.339184 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdrks\" (UniqueName: \"kubernetes.io/projected/e12d70c4-48a4-4c6d-8579-613fb070ec89-kube-api-access-gdrks\") pod \"auto-csr-approver-29555576-hftml\" (UID: \"e12d70c4-48a4-4c6d-8579-613fb070ec89\") " pod="openshift-infra/auto-csr-approver-29555576-hftml" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.374631 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdrks\" (UniqueName: \"kubernetes.io/projected/e12d70c4-48a4-4c6d-8579-613fb070ec89-kube-api-access-gdrks\") pod \"auto-csr-approver-29555576-hftml\" (UID: \"e12d70c4-48a4-4c6d-8579-613fb070ec89\") " pod="openshift-infra/auto-csr-approver-29555576-hftml" Mar 12 16:56:00 crc kubenswrapper[4687]: I0312 16:56:00.532602 4687 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555576-hftml" Mar 12 16:56:01 crc kubenswrapper[4687]: I0312 16:56:01.027058 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555576-hftml"] Mar 12 16:56:01 crc kubenswrapper[4687]: W0312 16:56:01.032297 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode12d70c4_48a4_4c6d_8579_613fb070ec89.slice/crio-4270bf1d157b9bc00d78879cc2ee6ab2b6810f7131ab8f4e6499196c9212bb35 WatchSource:0}: Error finding container 4270bf1d157b9bc00d78879cc2ee6ab2b6810f7131ab8f4e6499196c9212bb35: Status 404 returned error can't find the container with id 4270bf1d157b9bc00d78879cc2ee6ab2b6810f7131ab8f4e6499196c9212bb35 Mar 12 16:56:01 crc kubenswrapper[4687]: I0312 16:56:01.970485 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555576-hftml" event={"ID":"e12d70c4-48a4-4c6d-8579-613fb070ec89","Type":"ContainerStarted","Data":"4270bf1d157b9bc00d78879cc2ee6ab2b6810f7131ab8f4e6499196c9212bb35"} Mar 12 16:56:02 crc kubenswrapper[4687]: I0312 16:56:02.996426 4687 generic.go:334] "Generic (PLEG): container finished" podID="e12d70c4-48a4-4c6d-8579-613fb070ec89" containerID="b34424c6e068c49a887ed2309e492ef36491210f0a9382b0fb1203be797b536d" exitCode=0 Mar 12 16:56:02 crc kubenswrapper[4687]: I0312 16:56:02.997853 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555576-hftml" event={"ID":"e12d70c4-48a4-4c6d-8579-613fb070ec89","Type":"ContainerDied","Data":"b34424c6e068c49a887ed2309e492ef36491210f0a9382b0fb1203be797b536d"} Mar 12 16:56:04 crc kubenswrapper[4687]: I0312 16:56:04.412762 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555576-hftml" Mar 12 16:56:04 crc kubenswrapper[4687]: I0312 16:56:04.440174 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdrks\" (UniqueName: \"kubernetes.io/projected/e12d70c4-48a4-4c6d-8579-613fb070ec89-kube-api-access-gdrks\") pod \"e12d70c4-48a4-4c6d-8579-613fb070ec89\" (UID: \"e12d70c4-48a4-4c6d-8579-613fb070ec89\") " Mar 12 16:56:04 crc kubenswrapper[4687]: I0312 16:56:04.453691 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12d70c4-48a4-4c6d-8579-613fb070ec89-kube-api-access-gdrks" (OuterVolumeSpecName: "kube-api-access-gdrks") pod "e12d70c4-48a4-4c6d-8579-613fb070ec89" (UID: "e12d70c4-48a4-4c6d-8579-613fb070ec89"). InnerVolumeSpecName "kube-api-access-gdrks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:56:04 crc kubenswrapper[4687]: I0312 16:56:04.543484 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdrks\" (UniqueName: \"kubernetes.io/projected/e12d70c4-48a4-4c6d-8579-613fb070ec89-kube-api-access-gdrks\") on node \"crc\" DevicePath \"\"" Mar 12 16:56:04 crc kubenswrapper[4687]: I0312 16:56:04.732906 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:56:04 crc kubenswrapper[4687]: E0312 16:56:04.733210 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:56:05 crc kubenswrapper[4687]: I0312 16:56:05.020454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555576-hftml" event={"ID":"e12d70c4-48a4-4c6d-8579-613fb070ec89","Type":"ContainerDied","Data":"4270bf1d157b9bc00d78879cc2ee6ab2b6810f7131ab8f4e6499196c9212bb35"} Mar 12 16:56:05 crc kubenswrapper[4687]: I0312 16:56:05.020828 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4270bf1d157b9bc00d78879cc2ee6ab2b6810f7131ab8f4e6499196c9212bb35" Mar 12 16:56:05 crc kubenswrapper[4687]: I0312 16:56:05.020536 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555576-hftml" Mar 12 16:56:05 crc kubenswrapper[4687]: I0312 16:56:05.492733 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555570-g9trj"] Mar 12 16:56:05 crc kubenswrapper[4687]: I0312 16:56:05.503194 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555570-g9trj"] Mar 12 16:56:05 crc kubenswrapper[4687]: I0312 16:56:05.749743 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965c0c2e-6a1e-4492-b703-c3f4ecd3588e" path="/var/lib/kubelet/pods/965c0c2e-6a1e-4492-b703-c3f4ecd3588e/volumes" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.321128 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-56g8d"] Mar 12 16:56:19 crc kubenswrapper[4687]: E0312 16:56:19.322421 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d70c4-48a4-4c6d-8579-613fb070ec89" containerName="oc" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.322436 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d70c4-48a4-4c6d-8579-613fb070ec89" containerName="oc" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.322710 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d70c4-48a4-4c6d-8579-613fb070ec89" containerName="oc" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.324668 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.334433 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56g8d"] Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.470783 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxcf8\" (UniqueName: \"kubernetes.io/projected/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-kube-api-access-sxcf8\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.471184 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-utilities\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.471383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-catalog-content\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.574213 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxcf8\" (UniqueName: \"kubernetes.io/projected/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-kube-api-access-sxcf8\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.574349 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-utilities\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.574505 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-catalog-content\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.575115 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-catalog-content\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.575153 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-utilities\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.596546 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sxcf8\" (UniqueName: \"kubernetes.io/projected/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-kube-api-access-sxcf8\") pod \"certified-operators-56g8d\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.649319 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:19 crc kubenswrapper[4687]: I0312 16:56:19.733474 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:56:19 crc kubenswrapper[4687]: E0312 16:56:19.733954 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:56:20 crc kubenswrapper[4687]: I0312 16:56:20.197533 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56g8d"] Mar 12 16:56:20 crc kubenswrapper[4687]: W0312 16:56:20.205659 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4022aa7e_ecc3_4cc3_ae84_ed897cf4042c.slice/crio-a4ab04bb3d1844cb00833095d07de46a03ab6699d9db9434319017a2421b0b55 WatchSource:0}: Error finding container a4ab04bb3d1844cb00833095d07de46a03ab6699d9db9434319017a2421b0b55: Status 404 returned error can't find the container with id a4ab04bb3d1844cb00833095d07de46a03ab6699d9db9434319017a2421b0b55 Mar 12 16:56:21 crc kubenswrapper[4687]: I0312 16:56:21.211680 4687 generic.go:334] "Generic (PLEG): container finished" podID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerID="855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1" exitCode=0 Mar 12 16:56:21 crc kubenswrapper[4687]: I0312 16:56:21.211984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56g8d" event={"ID":"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c","Type":"ContainerDied","Data":"855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1"} Mar 12 16:56:21 crc kubenswrapper[4687]: I0312 16:56:21.212016 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56g8d" event={"ID":"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c","Type":"ContainerStarted","Data":"a4ab04bb3d1844cb00833095d07de46a03ab6699d9db9434319017a2421b0b55"} Mar 12 16:56:21 crc kubenswrapper[4687]: I0312 16:56:21.217662 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 16:56:23 crc kubenswrapper[4687]: I0312 16:56:23.234787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56g8d" event={"ID":"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c","Type":"ContainerStarted","Data":"509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a"} Mar 12 16:56:25 crc kubenswrapper[4687]: I0312 16:56:25.269136 4687 generic.go:334] "Generic (PLEG): container finished" podID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerID="509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a" exitCode=0 Mar 12 16:56:25 crc 
kubenswrapper[4687]: I0312 16:56:25.269232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56g8d" event={"ID":"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c","Type":"ContainerDied","Data":"509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a"} Mar 12 16:56:26 crc kubenswrapper[4687]: I0312 16:56:26.304117 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56g8d" event={"ID":"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c","Type":"ContainerStarted","Data":"32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a"} Mar 12 16:56:26 crc kubenswrapper[4687]: I0312 16:56:26.341921 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-56g8d" podStartSLOduration=2.828106221 podStartE2EDuration="7.341903901s" podCreationTimestamp="2026-03-12 16:56:19 +0000 UTC" firstStartedPulling="2026-03-12 16:56:21.217337586 +0000 UTC m=+3230.181299930" lastFinishedPulling="2026-03-12 16:56:25.731135266 +0000 UTC m=+3234.695097610" observedRunningTime="2026-03-12 16:56:26.33454224 +0000 UTC m=+3235.298504584" watchObservedRunningTime="2026-03-12 16:56:26.341903901 +0000 UTC m=+3235.305866245" Mar 12 16:56:29 crc kubenswrapper[4687]: I0312 16:56:29.650265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:29 crc kubenswrapper[4687]: I0312 16:56:29.650746 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:30 crc kubenswrapper[4687]: I0312 16:56:30.699755 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-56g8d" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="registry-server" probeResult="failure" output=< Mar 12 16:56:30 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:56:30 crc kubenswrapper[4687]: > Mar 12 16:56:30 crc kubenswrapper[4687]: I0312 16:56:30.733349 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:56:30 crc kubenswrapper[4687]: E0312 16:56:30.733797 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 16:56:34 crc kubenswrapper[4687]: I0312 16:56:34.926135 4687 scope.go:117] "RemoveContainer" containerID="8a837248011ccdcf770cd26fa8f137de0a3eecf40daac2d712bf162e8347c3e7" Mar 12 16:56:39 crc kubenswrapper[4687]: I0312 16:56:39.708606 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:39 crc kubenswrapper[4687]: I0312 16:56:39.786821 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:45 crc kubenswrapper[4687]: I0312 16:56:45.736337 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 16:56:46 crc kubenswrapper[4687]: I0312 16:56:46.505602 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"9433878e3a9622ce6db53fae0bb4d8ab03ad8236b3772b172987a71ca458659e"} Mar 12 16:56:47 crc kubenswrapper[4687]: I0312 16:56:47.895911 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56g8d"] Mar 12 16:56:47 crc kubenswrapper[4687]: I0312 16:56:47.896528 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-56g8d" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="registry-server" containerID="cri-o://32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a" gracePeriod=2 Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.499931 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.594311 4687 generic.go:334] "Generic (PLEG): container finished" podID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerID="32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a" exitCode=0 Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.594387 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56g8d" event={"ID":"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c","Type":"ContainerDied","Data":"32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a"} Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.594437 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56g8d" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.594467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56g8d" event={"ID":"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c","Type":"ContainerDied","Data":"a4ab04bb3d1844cb00833095d07de46a03ab6699d9db9434319017a2421b0b55"} Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.594494 4687 scope.go:117] "RemoveContainer" containerID="32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.612504 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxcf8\" (UniqueName: \"kubernetes.io/projected/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-kube-api-access-sxcf8\") pod \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.612573 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-utilities\") pod \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.612802 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-catalog-content\") pod \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\" (UID: \"4022aa7e-ecc3-4cc3-ae84-ed897cf4042c\") " Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.613550 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-utilities" 
(OuterVolumeSpecName: "utilities") pod "4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" (UID: "4022aa7e-ecc3-4cc3-ae84-ed897cf4042c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.619234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-kube-api-access-sxcf8" (OuterVolumeSpecName: "kube-api-access-sxcf8") pod "4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" (UID: "4022aa7e-ecc3-4cc3-ae84-ed897cf4042c"). InnerVolumeSpecName "kube-api-access-sxcf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.629712 4687 scope.go:117] "RemoveContainer" containerID="509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.691219 4687 scope.go:117] "RemoveContainer" containerID="855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.706063 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" (UID: "4022aa7e-ecc3-4cc3-ae84-ed897cf4042c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.717437 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxcf8\" (UniqueName: \"kubernetes.io/projected/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-kube-api-access-sxcf8\") on node \"crc\" DevicePath \"\"" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.717482 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.717496 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.749604 4687 scope.go:117] "RemoveContainer" containerID="32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a" Mar 12 16:56:48 crc kubenswrapper[4687]: E0312 16:56:48.750401 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a\": container with ID starting with 32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a not found: ID does not exist" containerID="32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.750470 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a"} err="failed to get container status \"32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a\": rpc error: code = NotFound desc = could not find container \"32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a\": container with ID starting with 32ac0fa26e4276b032b1d5ef6be1a82f21cb0221b9af941d6818629c41206a2a not found: ID does not exist" Mar 12 16:56:48 crc 
kubenswrapper[4687]: I0312 16:56:48.750503 4687 scope.go:117] "RemoveContainer" containerID="509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a" Mar 12 16:56:48 crc kubenswrapper[4687]: E0312 16:56:48.750874 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a\": container with ID starting with 509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a not found: ID does not exist" containerID="509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.750913 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a"} err="failed to get container status \"509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a\": rpc error: code = NotFound desc = could not find container \"509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a\": container with ID starting with 509c6255be100e00237dab423fd663f35360e5fd59e6165fa6ba0de63a62c36a not found: ID does not exist" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.750933 4687 scope.go:117] "RemoveContainer" containerID="855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1" Mar 12 16:56:48 crc kubenswrapper[4687]: E0312 16:56:48.751211 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1\": container with ID starting with 855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1 not found: ID does not exist" containerID="855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.751247 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1"} err="failed to get container status \"855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1\": rpc error: code = NotFound desc = could not find container \"855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1\": container with ID starting with 855cc6c187810f437a94329ee7407caff717f1adc30463e559690194898f27e1 not found: ID does not exist" Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.934982 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56g8d"] Mar 12 16:56:48 crc kubenswrapper[4687]: I0312 16:56:48.947045 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-56g8d"] Mar 12 16:56:49 crc kubenswrapper[4687]: I0312 16:56:49.748116 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" path="/var/lib/kubelet/pods/4022aa7e-ecc3-4cc3-ae84-ed897cf4042c/volumes" Mar 12 16:57:17 crc kubenswrapper[4687]: I0312 16:57:17.931385 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7bt6r"] Mar 12 16:57:17 crc kubenswrapper[4687]: E0312 16:57:17.932473 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="registry-server" Mar 12 16:57:17 crc kubenswrapper[4687]: I0312 16:57:17.932488 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="registry-server" Mar 12 16:57:17 crc kubenswrapper[4687]: E0312 16:57:17.932509 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="extract-content" Mar 12 16:57:17 crc kubenswrapper[4687]: I0312 16:57:17.932515 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="extract-content" Mar 12 16:57:17 crc kubenswrapper[4687]: E0312 16:57:17.932535 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="extract-utilities" Mar 12 16:57:17 crc kubenswrapper[4687]: I0312 16:57:17.932542 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="extract-utilities" Mar 12 16:57:17 crc kubenswrapper[4687]: I0312 16:57:17.932951 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4022aa7e-ecc3-4cc3-ae84-ed897cf4042c" containerName="registry-server" Mar 12 16:57:17 crc kubenswrapper[4687]: I0312 16:57:17.935017 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:17 crc kubenswrapper[4687]: I0312 16:57:17.958525 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bt6r"] Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.035312 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwxs\" (UniqueName: \"kubernetes.io/projected/362d0ce3-2708-411a-8c17-5de8598bea81-kube-api-access-2fwxs\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.035731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-catalog-content\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.035761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-utilities\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.138484 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fwxs\" (UniqueName: \"kubernetes.io/projected/362d0ce3-2708-411a-8c17-5de8598bea81-kube-api-access-2fwxs\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.138586 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-catalog-content\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.138612 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-utilities\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.139273 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-utilities\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.140503 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-catalog-content\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.161030 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fwxs\" (UniqueName: \"kubernetes.io/projected/362d0ce3-2708-411a-8c17-5de8598bea81-kube-api-access-2fwxs\") pod \"redhat-operators-7bt6r\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.255724 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.850064 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bt6r"] Mar 12 16:57:18 crc kubenswrapper[4687]: I0312 16:57:18.980285 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bt6r" event={"ID":"362d0ce3-2708-411a-8c17-5de8598bea81","Type":"ContainerStarted","Data":"bf6a4ef72e7a92b04a952d9cd8e3afab4dceba486199f3a41a76866464b99741"} Mar 12 16:57:19 crc kubenswrapper[4687]: I0312 16:57:19.992488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bt6r" event={"ID":"362d0ce3-2708-411a-8c17-5de8598bea81","Type":"ContainerDied","Data":"7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77"} Mar 12 16:57:19 crc kubenswrapper[4687]: I0312 16:57:19.992349 4687 generic.go:334] "Generic (PLEG): container finished" podID="362d0ce3-2708-411a-8c17-5de8598bea81" containerID="7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77" exitCode=0 Mar 12 16:57:21 crc kubenswrapper[4687]: I0312 16:57:21.003957 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bt6r" event={"ID":"362d0ce3-2708-411a-8c17-5de8598bea81","Type":"ContainerStarted","Data":"f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613"} Mar 12 16:57:27 crc kubenswrapper[4687]: I0312 16:57:27.073280 4687 generic.go:334] "Generic (PLEG): container finished" podID="362d0ce3-2708-411a-8c17-5de8598bea81" containerID="f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613" exitCode=0 Mar 12 16:57:27 crc kubenswrapper[4687]: I0312 16:57:27.073331 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bt6r" 
event={"ID":"362d0ce3-2708-411a-8c17-5de8598bea81","Type":"ContainerDied","Data":"f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613"} Mar 12 16:57:28 crc kubenswrapper[4687]: I0312 16:57:28.087466 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bt6r" event={"ID":"362d0ce3-2708-411a-8c17-5de8598bea81","Type":"ContainerStarted","Data":"4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30"} Mar 12 16:57:28 crc kubenswrapper[4687]: I0312 16:57:28.113726 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7bt6r" podStartSLOduration=3.577693198 podStartE2EDuration="11.113705957s" podCreationTimestamp="2026-03-12 16:57:17 +0000 UTC" firstStartedPulling="2026-03-12 16:57:19.994982291 +0000 UTC m=+3288.958944645" lastFinishedPulling="2026-03-12 16:57:27.53099506 +0000 UTC m=+3296.494957404" observedRunningTime="2026-03-12 16:57:28.10611934 +0000 UTC m=+3297.070081684" watchObservedRunningTime="2026-03-12 16:57:28.113705957 +0000 UTC m=+3297.077668291" Mar 12 16:57:28 crc kubenswrapper[4687]: I0312 16:57:28.255960 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:28 crc kubenswrapper[4687]: I0312 16:57:28.256003 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:29 crc kubenswrapper[4687]: I0312 16:57:29.302696 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7bt6r" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="registry-server" probeResult="failure" output=< Mar 12 16:57:29 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:57:29 crc kubenswrapper[4687]: > Mar 12 16:57:39 crc kubenswrapper[4687]: I0312 16:57:39.357794 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7bt6r" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="registry-server" probeResult="failure" output=< Mar 12 16:57:39 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 16:57:39 crc kubenswrapper[4687]: > Mar 12 16:57:48 crc kubenswrapper[4687]: I0312 16:57:48.317726 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:48 crc kubenswrapper[4687]: I0312 16:57:48.376273 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:49 crc kubenswrapper[4687]: I0312 16:57:49.140954 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bt6r"] Mar 12 16:57:49 crc kubenswrapper[4687]: I0312 16:57:49.383779 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7bt6r" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="registry-server" containerID="cri-o://4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30" gracePeriod=2 Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.047344 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.116853 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fwxs\" (UniqueName: \"kubernetes.io/projected/362d0ce3-2708-411a-8c17-5de8598bea81-kube-api-access-2fwxs\") pod \"362d0ce3-2708-411a-8c17-5de8598bea81\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.117246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-catalog-content\") pod \"362d0ce3-2708-411a-8c17-5de8598bea81\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.117383 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-utilities\") pod \"362d0ce3-2708-411a-8c17-5de8598bea81\" (UID: \"362d0ce3-2708-411a-8c17-5de8598bea81\") " Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.118536 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-utilities" (OuterVolumeSpecName: "utilities") pod "362d0ce3-2708-411a-8c17-5de8598bea81" (UID: "362d0ce3-2708-411a-8c17-5de8598bea81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.124210 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362d0ce3-2708-411a-8c17-5de8598bea81-kube-api-access-2fwxs" (OuterVolumeSpecName: "kube-api-access-2fwxs") pod "362d0ce3-2708-411a-8c17-5de8598bea81" (UID: "362d0ce3-2708-411a-8c17-5de8598bea81"). InnerVolumeSpecName "kube-api-access-2fwxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.221249 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.221283 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fwxs\" (UniqueName: \"kubernetes.io/projected/362d0ce3-2708-411a-8c17-5de8598bea81-kube-api-access-2fwxs\") on node \"crc\" DevicePath \"\"" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.272492 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "362d0ce3-2708-411a-8c17-5de8598bea81" (UID: "362d0ce3-2708-411a-8c17-5de8598bea81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.323504 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362d0ce3-2708-411a-8c17-5de8598bea81-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.400397 4687 generic.go:334] "Generic (PLEG): container finished" podID="362d0ce3-2708-411a-8c17-5de8598bea81" containerID="4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30" exitCode=0 Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.400456 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bt6r" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.400489 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bt6r" event={"ID":"362d0ce3-2708-411a-8c17-5de8598bea81","Type":"ContainerDied","Data":"4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30"} Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.401654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bt6r" event={"ID":"362d0ce3-2708-411a-8c17-5de8598bea81","Type":"ContainerDied","Data":"bf6a4ef72e7a92b04a952d9cd8e3afab4dceba486199f3a41a76866464b99741"} Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.401675 4687 scope.go:117] "RemoveContainer" containerID="4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.445210 4687 scope.go:117] "RemoveContainer" containerID="f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.450899 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bt6r"] Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.461653 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7bt6r"] Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.487484 4687 scope.go:117] "RemoveContainer" containerID="7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.528546 4687 scope.go:117] "RemoveContainer" containerID="4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30" Mar 12 16:57:50 crc kubenswrapper[4687]: E0312 16:57:50.528997 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30\": container with ID starting with 4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30 not found: ID does not exist" containerID="4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.529044 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30"} err="failed to get container status \"4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30\": rpc error: code = NotFound desc = could not find container \"4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30\": container with ID starting with 4d828d9306a0e75670dca5a39bda7dd4906123a0df05372fa508a9687094be30 not found: ID does not exist" Mar 12 16:57:50 crc 
kubenswrapper[4687]: I0312 16:57:50.529071 4687 scope.go:117] "RemoveContainer" containerID="f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613" Mar 12 16:57:50 crc kubenswrapper[4687]: E0312 16:57:50.529884 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613\": container with ID starting with f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613 not found: ID does not exist" containerID="f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.530026 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613"} err="failed to get container status \"f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613\": rpc error: code = NotFound desc = could not find container \"f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613\": container with ID starting with f3e0c98744e1e1ca640e29726909c2f8c49c0cac2064426b8137a252811e0613 not found: ID does not exist" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.530137 4687 scope.go:117] "RemoveContainer" containerID="7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77" Mar 12 16:57:50 crc kubenswrapper[4687]: E0312 16:57:50.530551 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77\": container with ID starting with 7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77 not found: ID does not exist" containerID="7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77" Mar 12 16:57:50 crc kubenswrapper[4687]: I0312 16:57:50.530584 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77"} err="failed to get container status \"7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77\": rpc error: code = NotFound desc = could not find container \"7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77\": container with ID starting with 7738dc5ea89dd731c1c0d501cf1aaa373efe54097de9febe107711aa16ae2d77 not found: ID does not exist" Mar 12 16:57:51 crc kubenswrapper[4687]: I0312 16:57:51.747844 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" path="/var/lib/kubelet/pods/362d0ce3-2708-411a-8c17-5de8598bea81/volumes" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.151191 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555578-s6x2k"] Mar 12 16:58:00 crc kubenswrapper[4687]: E0312 16:58:00.152230 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="registry-server" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.152246 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="registry-server" Mar 12 16:58:00 crc kubenswrapper[4687]: E0312 16:58:00.152274 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="extract-content" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.152280 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="extract-content" Mar 12 16:58:00 crc kubenswrapper[4687]: E0312 16:58:00.152289 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="extract-utilities" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.152296 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="extract-utilities" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.152540 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="362d0ce3-2708-411a-8c17-5de8598bea81" containerName="registry-server" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.153522 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.155684 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.155690 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.157760 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.162075 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555578-s6x2k"] Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.276619 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl5j5\" (UniqueName: \"kubernetes.io/projected/65f3bb66-51b6-4f3a-a734-438261e75158-kube-api-access-wl5j5\") pod \"auto-csr-approver-29555578-s6x2k\" (UID: \"65f3bb66-51b6-4f3a-a734-438261e75158\") " pod="openshift-infra/auto-csr-approver-29555578-s6x2k" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.379855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl5j5\" (UniqueName: \"kubernetes.io/projected/65f3bb66-51b6-4f3a-a734-438261e75158-kube-api-access-wl5j5\") pod \"auto-csr-approver-29555578-s6x2k\" (UID: \"65f3bb66-51b6-4f3a-a734-438261e75158\") " pod="openshift-infra/auto-csr-approver-29555578-s6x2k" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.398328 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl5j5\" (UniqueName: \"kubernetes.io/projected/65f3bb66-51b6-4f3a-a734-438261e75158-kube-api-access-wl5j5\") pod \"auto-csr-approver-29555578-s6x2k\" (UID: \"65f3bb66-51b6-4f3a-a734-438261e75158\") " pod="openshift-infra/auto-csr-approver-29555578-s6x2k" Mar 12 16:58:00 crc kubenswrapper[4687]: I0312 16:58:00.475047 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" Mar 12 16:58:01 crc kubenswrapper[4687]: I0312 16:58:01.022038 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555578-s6x2k"] Mar 12 16:58:01 crc kubenswrapper[4687]: I0312 16:58:01.522174 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" event={"ID":"65f3bb66-51b6-4f3a-a734-438261e75158","Type":"ContainerStarted","Data":"76da21446a6c0c3c248652e1588134d9df9832a92e2328c449954af4d0263a46"} Mar 12 16:58:02 crc kubenswrapper[4687]: I0312 16:58:02.533551 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" event={"ID":"65f3bb66-51b6-4f3a-a734-438261e75158","Type":"ContainerStarted","Data":"9d65ddd9534ed8d1c427f0f7f2416ed92d474cf322c35d8b7705f361305b7173"} Mar 12 16:58:02 crc kubenswrapper[4687]: I0312 16:58:02.553254 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" podStartSLOduration=1.5238075580000001 podStartE2EDuration="2.553233397s" podCreationTimestamp="2026-03-12 16:58:00 +0000 UTC" firstStartedPulling="2026-03-12 16:58:01.031334188 +0000 UTC m=+3329.995296542" lastFinishedPulling="2026-03-12 16:58:02.060760037 +0000 UTC m=+3331.024722381" observedRunningTime="2026-03-12 16:58:02.548696894 +0000 UTC m=+3331.512659228" watchObservedRunningTime="2026-03-12 16:58:02.553233397 +0000 UTC m=+3331.517195741" Mar 12 16:58:04 crc kubenswrapper[4687]: I0312 16:58:04.554797 4687 generic.go:334] "Generic (PLEG): container finished" podID="65f3bb66-51b6-4f3a-a734-438261e75158" containerID="9d65ddd9534ed8d1c427f0f7f2416ed92d474cf322c35d8b7705f361305b7173" exitCode=0 Mar 12 16:58:04 crc kubenswrapper[4687]: I0312 16:58:04.554877 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" event={"ID":"65f3bb66-51b6-4f3a-a734-438261e75158","Type":"ContainerDied","Data":"9d65ddd9534ed8d1c427f0f7f2416ed92d474cf322c35d8b7705f361305b7173"} Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.028896 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.207098 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl5j5\" (UniqueName: \"kubernetes.io/projected/65f3bb66-51b6-4f3a-a734-438261e75158-kube-api-access-wl5j5\") pod \"65f3bb66-51b6-4f3a-a734-438261e75158\" (UID: \"65f3bb66-51b6-4f3a-a734-438261e75158\") " Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.214631 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f3bb66-51b6-4f3a-a734-438261e75158-kube-api-access-wl5j5" (OuterVolumeSpecName: "kube-api-access-wl5j5") pod "65f3bb66-51b6-4f3a-a734-438261e75158" (UID: "65f3bb66-51b6-4f3a-a734-438261e75158"). InnerVolumeSpecName "kube-api-access-wl5j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.310822 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl5j5\" (UniqueName: \"kubernetes.io/projected/65f3bb66-51b6-4f3a-a734-438261e75158-kube-api-access-wl5j5\") on node \"crc\" DevicePath \"\"" Mar 12 16:58:06 crc kubenswrapper[4687]: E0312 16:58:06.350940 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.38:35820->38.102.83.38:46595: write tcp 38.102.83.38:35820->38.102.83.38:46595: write: broken pipe Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.578230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" event={"ID":"65f3bb66-51b6-4f3a-a734-438261e75158","Type":"ContainerDied","Data":"76da21446a6c0c3c248652e1588134d9df9832a92e2328c449954af4d0263a46"} Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.578271 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76da21446a6c0c3c248652e1588134d9df9832a92e2328c449954af4d0263a46" Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.578312 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555578-s6x2k" Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.657946 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555572-n72kt"] Mar 12 16:58:06 crc kubenswrapper[4687]: I0312 16:58:06.669428 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555572-n72kt"] Mar 12 16:58:07 crc kubenswrapper[4687]: I0312 16:58:07.747139 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bf66a23-2621-47b0-b006-0d9b0f21b0a5" path="/var/lib/kubelet/pods/2bf66a23-2621-47b0-b006-0d9b0f21b0a5/volumes" Mar 12 16:58:35 crc kubenswrapper[4687]: I0312 16:58:35.068797 4687 scope.go:117] "RemoveContainer" containerID="1a374b06560de530d622e48c9db58ef1a8feedf49e3f0b0236d4d4fd391f5a30" Mar 12 16:59:14 crc kubenswrapper[4687]: I0312 16:59:14.121796 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:59:14 crc kubenswrapper[4687]: I0312 16:59:14.122305 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 16:59:44 crc kubenswrapper[4687]: I0312 16:59:44.121784 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 16:59:44 crc kubenswrapper[4687]: I0312 16:59:44.122326 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.156666 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555580-77nr7"] Mar 12 17:00:00 crc kubenswrapper[4687]: E0312 17:00:00.158146 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f3bb66-51b6-4f3a-a734-438261e75158" containerName="oc" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.158170 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f3bb66-51b6-4f3a-a734-438261e75158" containerName="oc" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.158622 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f3bb66-51b6-4f3a-a734-438261e75158" containerName="oc" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.160009 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555580-77nr7" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.162828 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.163607 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.164088 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.168847 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft"] Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.171515 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.173967 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.175387 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.180890 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555580-77nr7"] Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.193778 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft"] Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.215752 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv6v7\" (UniqueName: \"kubernetes.io/projected/988076ed-736a-4a8f-b3a6-fef130be728f-kube-api-access-hv6v7\") pod \"auto-csr-approver-29555580-77nr7\" (UID: \"988076ed-736a-4a8f-b3a6-fef130be728f\") " pod="openshift-infra/auto-csr-approver-29555580-77nr7" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.215918 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kchjb\" (UniqueName: \"kubernetes.io/projected/0db0bb9f-b026-48cc-abd8-4e63489c3053-kube-api-access-kchjb\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.215957 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0db0bb9f-b026-48cc-abd8-4e63489c3053-config-volume\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.216018 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0db0bb9f-b026-48cc-abd8-4e63489c3053-secret-volume\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.318331 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kchjb\" (UniqueName: \"kubernetes.io/projected/0db0bb9f-b026-48cc-abd8-4e63489c3053-kube-api-access-kchjb\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.318432 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0db0bb9f-b026-48cc-abd8-4e63489c3053-config-volume\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.318700 
4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0db0bb9f-b026-48cc-abd8-4e63489c3053-secret-volume\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.318976 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv6v7\" (UniqueName: \"kubernetes.io/projected/988076ed-736a-4a8f-b3a6-fef130be728f-kube-api-access-hv6v7\") pod \"auto-csr-approver-29555580-77nr7\" (UID: \"988076ed-736a-4a8f-b3a6-fef130be728f\") " pod="openshift-infra/auto-csr-approver-29555580-77nr7" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.319466 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0db0bb9f-b026-48cc-abd8-4e63489c3053-config-volume\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.340235 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0db0bb9f-b026-48cc-abd8-4e63489c3053-secret-volume\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.352152 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv6v7\" (UniqueName: \"kubernetes.io/projected/988076ed-736a-4a8f-b3a6-fef130be728f-kube-api-access-hv6v7\") pod \"auto-csr-approver-29555580-77nr7\" (UID: \"988076ed-736a-4a8f-b3a6-fef130be728f\") " pod="openshift-infra/auto-csr-approver-29555580-77nr7" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.356161 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kchjb\" (UniqueName: \"kubernetes.io/projected/0db0bb9f-b026-48cc-abd8-4e63489c3053-kube-api-access-kchjb\") pod \"collect-profiles-29555580-zglft\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.492740 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555580-77nr7" Mar 12 17:00:00 crc kubenswrapper[4687]: I0312 17:00:00.511985 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:01 crc kubenswrapper[4687]: I0312 17:00:01.055223 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555580-77nr7"] Mar 12 17:00:01 crc kubenswrapper[4687]: I0312 17:00:01.149321 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft"] Mar 12 17:00:01 crc kubenswrapper[4687]: I0312 17:00:01.853212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555580-77nr7" event={"ID":"988076ed-736a-4a8f-b3a6-fef130be728f","Type":"ContainerStarted","Data":"ff17544e9c02c2a9346be7580b8c71a02ef7a628b70b17e772bf378b82d2df8d"} Mar 12 17:00:01 crc kubenswrapper[4687]: I0312 17:00:01.854871 4687 generic.go:334] "Generic (PLEG): container finished" podID="0db0bb9f-b026-48cc-abd8-4e63489c3053" containerID="8f8d29c0664f5d0b3ea0e10d7ac79136a907cb54f15314939fbc17f661122f82" exitCode=0 Mar 12 17:00:01 crc kubenswrapper[4687]: I0312 17:00:01.854931 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" event={"ID":"0db0bb9f-b026-48cc-abd8-4e63489c3053","Type":"ContainerDied","Data":"8f8d29c0664f5d0b3ea0e10d7ac79136a907cb54f15314939fbc17f661122f82"} Mar 12 17:00:01 crc kubenswrapper[4687]: I0312 17:00:01.854965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" event={"ID":"0db0bb9f-b026-48cc-abd8-4e63489c3053","Type":"ContainerStarted","Data":"6aa6adb03f065dfb96a61d3ae24378c1d089234273e776928babcfbc827f79e3"} Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.291174 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.406012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0db0bb9f-b026-48cc-abd8-4e63489c3053-secret-volume\") pod \"0db0bb9f-b026-48cc-abd8-4e63489c3053\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.406066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0db0bb9f-b026-48cc-abd8-4e63489c3053-config-volume\") pod \"0db0bb9f-b026-48cc-abd8-4e63489c3053\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.406129 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kchjb\" (UniqueName: \"kubernetes.io/projected/0db0bb9f-b026-48cc-abd8-4e63489c3053-kube-api-access-kchjb\") pod \"0db0bb9f-b026-48cc-abd8-4e63489c3053\" (UID: \"0db0bb9f-b026-48cc-abd8-4e63489c3053\") " Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.407312 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db0bb9f-b026-48cc-abd8-4e63489c3053-config-volume" (OuterVolumeSpecName: "config-volume") pod "0db0bb9f-b026-48cc-abd8-4e63489c3053" (UID: "0db0bb9f-b026-48cc-abd8-4e63489c3053"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.413767 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db0bb9f-b026-48cc-abd8-4e63489c3053-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0db0bb9f-b026-48cc-abd8-4e63489c3053" (UID: "0db0bb9f-b026-48cc-abd8-4e63489c3053"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.417105 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db0bb9f-b026-48cc-abd8-4e63489c3053-kube-api-access-kchjb" (OuterVolumeSpecName: "kube-api-access-kchjb") pod "0db0bb9f-b026-48cc-abd8-4e63489c3053" (UID: "0db0bb9f-b026-48cc-abd8-4e63489c3053"). InnerVolumeSpecName "kube-api-access-kchjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.509502 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kchjb\" (UniqueName: \"kubernetes.io/projected/0db0bb9f-b026-48cc-abd8-4e63489c3053-kube-api-access-kchjb\") on node \"crc\" DevicePath \"\"" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.509539 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0db0bb9f-b026-48cc-abd8-4e63489c3053-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.509550 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0db0bb9f-b026-48cc-abd8-4e63489c3053-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.878881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555580-77nr7" event={"ID":"988076ed-736a-4a8f-b3a6-fef130be728f","Type":"ContainerStarted","Data":"38163a90608dcbed85a16b3de83f08705e1aa5792559def6e958ef9d4cf29aa3"} Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.882027 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" event={"ID":"0db0bb9f-b026-48cc-abd8-4e63489c3053","Type":"ContainerDied","Data":"6aa6adb03f065dfb96a61d3ae24378c1d089234273e776928babcfbc827f79e3"} Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.882064 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa6adb03f065dfb96a61d3ae24378c1d089234273e776928babcfbc827f79e3" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.882129 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555580-zglft" Mar 12 17:00:03 crc kubenswrapper[4687]: I0312 17:00:03.901114 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555580-77nr7" podStartSLOduration=1.4237543129999999 podStartE2EDuration="3.901093665s" podCreationTimestamp="2026-03-12 17:00:00 +0000 UTC" firstStartedPulling="2026-03-12 17:00:01.077948304 +0000 UTC m=+3450.041910648" lastFinishedPulling="2026-03-12 17:00:03.555287656 +0000 UTC m=+3452.519250000" observedRunningTime="2026-03-12 17:00:03.895898773 +0000 UTC m=+3452.859861117" watchObservedRunningTime="2026-03-12 17:00:03.901093665 +0000 UTC m=+3452.865056009" Mar 12 17:00:04 crc kubenswrapper[4687]: I0312 17:00:04.376690 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j"] Mar 12 17:00:04 crc kubenswrapper[4687]: I0312 17:00:04.388722 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555535-5nw4j"] Mar 12 17:00:04 crc kubenswrapper[4687]: I0312 17:00:04.900065 4687 generic.go:334] "Generic (PLEG): container finished" podID="988076ed-736a-4a8f-b3a6-fef130be728f" containerID="38163a90608dcbed85a16b3de83f08705e1aa5792559def6e958ef9d4cf29aa3" exitCode=0 Mar 12 17:00:04 crc kubenswrapper[4687]: I0312 17:00:04.900106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555580-77nr7" event={"ID":"988076ed-736a-4a8f-b3a6-fef130be728f","Type":"ContainerDied","Data":"38163a90608dcbed85a16b3de83f08705e1aa5792559def6e958ef9d4cf29aa3"} Mar 12 17:00:05 crc kubenswrapper[4687]: I0312 17:00:05.748300 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9c53d5-0b9e-4f99-84d8-edfcea4675aa" path="/var/lib/kubelet/pods/6c9c53d5-0b9e-4f99-84d8-edfcea4675aa/volumes" Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.313896 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555580-77nr7" Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.387069 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv6v7\" (UniqueName: \"kubernetes.io/projected/988076ed-736a-4a8f-b3a6-fef130be728f-kube-api-access-hv6v7\") pod \"988076ed-736a-4a8f-b3a6-fef130be728f\" (UID: \"988076ed-736a-4a8f-b3a6-fef130be728f\") " Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.393238 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988076ed-736a-4a8f-b3a6-fef130be728f-kube-api-access-hv6v7" (OuterVolumeSpecName: "kube-api-access-hv6v7") pod "988076ed-736a-4a8f-b3a6-fef130be728f" (UID: "988076ed-736a-4a8f-b3a6-fef130be728f"). InnerVolumeSpecName "kube-api-access-hv6v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.490577 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv6v7\" (UniqueName: \"kubernetes.io/projected/988076ed-736a-4a8f-b3a6-fef130be728f-kube-api-access-hv6v7\") on node \"crc\" DevicePath \"\"" Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.926507 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555580-77nr7" Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.926515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555580-77nr7" event={"ID":"988076ed-736a-4a8f-b3a6-fef130be728f","Type":"ContainerDied","Data":"ff17544e9c02c2a9346be7580b8c71a02ef7a628b70b17e772bf378b82d2df8d"} Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.928205 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff17544e9c02c2a9346be7580b8c71a02ef7a628b70b17e772bf378b82d2df8d" Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.962938 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555574-p4qbn"] Mar 12 17:00:06 crc kubenswrapper[4687]: I0312 17:00:06.973577 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555574-p4qbn"] Mar 12 17:00:07 crc kubenswrapper[4687]: I0312 17:00:07.747173 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c16aa9-7193-437a-9ce1-bffc3e52887f" path="/var/lib/kubelet/pods/e1c16aa9-7193-437a-9ce1-bffc3e52887f/volumes" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.122221 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.122763 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.122811 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.123660 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9433878e3a9622ce6db53fae0bb4d8ab03ad8236b3772b172987a71ca458659e"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.123708 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://9433878e3a9622ce6db53fae0bb4d8ab03ad8236b3772b172987a71ca458659e" gracePeriod=600 Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.703856 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jzvbk"] Mar 12 17:00:14 crc kubenswrapper[4687]: E0312 17:00:14.704807 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db0bb9f-b026-48cc-abd8-4e63489c3053" containerName="collect-profiles" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.704834 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db0bb9f-b026-48cc-abd8-4e63489c3053" 
containerName="collect-profiles" Mar 12 17:00:14 crc kubenswrapper[4687]: E0312 17:00:14.704862 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988076ed-736a-4a8f-b3a6-fef130be728f" containerName="oc" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.704871 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="988076ed-736a-4a8f-b3a6-fef130be728f" containerName="oc" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.705178 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="988076ed-736a-4a8f-b3a6-fef130be728f" containerName="oc" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.705224 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db0bb9f-b026-48cc-abd8-4e63489c3053" containerName="collect-profiles" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.707326 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.714646 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzvbk"] Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.807921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-utilities\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.808112 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-catalog-content\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.808248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnh5b\" (UniqueName: \"kubernetes.io/projected/6399ce2b-65c3-4723-ba02-50aad9aedd67-kube-api-access-hnh5b\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.911320 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnh5b\" (UniqueName: \"kubernetes.io/projected/6399ce2b-65c3-4723-ba02-50aad9aedd67-kube-api-access-hnh5b\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.912498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-utilities\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.912960 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-catalog-content\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " 
pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.912977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-utilities\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.913320 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-catalog-content\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:14 crc kubenswrapper[4687]: I0312 17:00:14.937104 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnh5b\" (UniqueName: \"kubernetes.io/projected/6399ce2b-65c3-4723-ba02-50aad9aedd67-kube-api-access-hnh5b\") pod \"community-operators-jzvbk\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:15 crc kubenswrapper[4687]: I0312 17:00:15.023738 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="9433878e3a9622ce6db53fae0bb4d8ab03ad8236b3772b172987a71ca458659e" exitCode=0 Mar 12 17:00:15 crc kubenswrapper[4687]: I0312 17:00:15.023798 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"9433878e3a9622ce6db53fae0bb4d8ab03ad8236b3772b172987a71ca458659e"} Mar 12 17:00:15 crc kubenswrapper[4687]: I0312 17:00:15.023836 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b"} Mar 12 17:00:15 crc kubenswrapper[4687]: I0312 17:00:15.023857 4687 scope.go:117] "RemoveContainer" containerID="4d84881d29605e90e3fb79c98c1856442b33601c7fd7f76810bd1f4a1d90bb65" Mar 12 17:00:15 crc kubenswrapper[4687]: I0312 17:00:15.028473 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:15 crc kubenswrapper[4687]: I0312 17:00:15.653653 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzvbk"] Mar 12 17:00:16 crc kubenswrapper[4687]: I0312 17:00:16.041401 4687 generic.go:334] "Generic (PLEG): container finished" podID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerID="dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec" exitCode=0 Mar 12 17:00:16 crc kubenswrapper[4687]: I0312 17:00:16.041437 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzvbk" event={"ID":"6399ce2b-65c3-4723-ba02-50aad9aedd67","Type":"ContainerDied","Data":"dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec"} Mar 12 17:00:16 crc kubenswrapper[4687]: I0312 17:00:16.041459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzvbk" event={"ID":"6399ce2b-65c3-4723-ba02-50aad9aedd67","Type":"ContainerStarted","Data":"9f5d7ac8c3fb59a61f1712243eb1cdacf3e7ba6b3ed655e35fe933066546e64e"} Mar 12 17:00:17 crc kubenswrapper[4687]: I0312 17:00:17.051128 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzvbk" event={"ID":"6399ce2b-65c3-4723-ba02-50aad9aedd67","Type":"ContainerStarted","Data":"e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73"} Mar 12 17:00:21 crc kubenswrapper[4687]: I0312 17:00:21.096917 4687 generic.go:334] "Generic (PLEG): container finished" podID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerID="e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73" exitCode=0 Mar 12 17:00:21 crc kubenswrapper[4687]: I0312 17:00:21.097015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzvbk" event={"ID":"6399ce2b-65c3-4723-ba02-50aad9aedd67","Type":"ContainerDied","Data":"e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73"} Mar 12 17:00:24 crc kubenswrapper[4687]: I0312 17:00:24.148601 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzvbk" event={"ID":"6399ce2b-65c3-4723-ba02-50aad9aedd67","Type":"ContainerStarted","Data":"5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28"} Mar 12 17:00:24 crc kubenswrapper[4687]: I0312 17:00:24.182115 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jzvbk" podStartSLOduration=4.20926311 podStartE2EDuration="10.182092424s" podCreationTimestamp="2026-03-12 17:00:14 +0000 UTC" firstStartedPulling="2026-03-12 17:00:16.043985992 +0000 UTC m=+3465.007948336" lastFinishedPulling="2026-03-12 17:00:22.016815306 +0000 UTC m=+3470.980777650" observedRunningTime="2026-03-12 17:00:24.176133841 +0000 UTC m=+3473.140096195" watchObservedRunningTime="2026-03-12 17:00:24.182092424 +0000 UTC m=+3473.146054788" Mar 12 17:00:25 crc kubenswrapper[4687]: I0312 17:00:25.029607 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:25 crc kubenswrapper[4687]: I0312 17:00:25.029906 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:26 crc kubenswrapper[4687]: I0312 17:00:26.077258 4687 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-jzvbk" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="registry-server" probeResult="failure" output=< Mar 12 17:00:26 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:00:26 crc kubenswrapper[4687]: > Mar 12 17:00:35 crc kubenswrapper[4687]: I0312 17:00:35.082672 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:35 crc kubenswrapper[4687]: I0312 17:00:35.143438 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:35 crc kubenswrapper[4687]: I0312 17:00:35.241665 4687 scope.go:117] "RemoveContainer" containerID="347eacd1677f6a7eb33ab5c5d07f82b9cbeed62bc73c90d8e79dde904122e552" Mar 12 17:00:35 crc kubenswrapper[4687]: I0312 17:00:35.297754 4687 scope.go:117] "RemoveContainer" containerID="820440dbfa46ae90f64c8efbb408d225c5722bf3dfaf1a2e88180e1b34e6ea02" Mar 12 17:00:35 crc kubenswrapper[4687]: I0312 17:00:35.338312 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzvbk"] Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.349058 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jzvbk" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="registry-server" containerID="cri-o://5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28" gracePeriod=2 Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.907486 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.990624 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-utilities\") pod \"6399ce2b-65c3-4723-ba02-50aad9aedd67\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.990863 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-catalog-content\") pod \"6399ce2b-65c3-4723-ba02-50aad9aedd67\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.990892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnh5b\" (UniqueName: \"kubernetes.io/projected/6399ce2b-65c3-4723-ba02-50aad9aedd67-kube-api-access-hnh5b\") pod \"6399ce2b-65c3-4723-ba02-50aad9aedd67\" (UID: \"6399ce2b-65c3-4723-ba02-50aad9aedd67\") " Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.991414 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-utilities" (OuterVolumeSpecName: "utilities") pod "6399ce2b-65c3-4723-ba02-50aad9aedd67" (UID: "6399ce2b-65c3-4723-ba02-50aad9aedd67"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.992160 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:00:36 crc kubenswrapper[4687]: I0312 17:00:36.999645 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6399ce2b-65c3-4723-ba02-50aad9aedd67-kube-api-access-hnh5b" (OuterVolumeSpecName: "kube-api-access-hnh5b") pod "6399ce2b-65c3-4723-ba02-50aad9aedd67" (UID: "6399ce2b-65c3-4723-ba02-50aad9aedd67"). InnerVolumeSpecName "kube-api-access-hnh5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.044253 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6399ce2b-65c3-4723-ba02-50aad9aedd67" (UID: "6399ce2b-65c3-4723-ba02-50aad9aedd67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.094691 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6399ce2b-65c3-4723-ba02-50aad9aedd67-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.094727 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnh5b\" (UniqueName: \"kubernetes.io/projected/6399ce2b-65c3-4723-ba02-50aad9aedd67-kube-api-access-hnh5b\") on node \"crc\" DevicePath \"\"" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.361991 4687 generic.go:334] "Generic (PLEG): container finished" podID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerID="5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28" exitCode=0 Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.362035 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzvbk" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.362038 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzvbk" event={"ID":"6399ce2b-65c3-4723-ba02-50aad9aedd67","Type":"ContainerDied","Data":"5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28"} Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.362076 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzvbk" event={"ID":"6399ce2b-65c3-4723-ba02-50aad9aedd67","Type":"ContainerDied","Data":"9f5d7ac8c3fb59a61f1712243eb1cdacf3e7ba6b3ed655e35fe933066546e64e"} Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.362096 4687 scope.go:117] "RemoveContainer" containerID="5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.405827 4687 scope.go:117] "RemoveContainer" containerID="e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.419139 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzvbk"] Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.439685 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jzvbk"] Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.451533 4687 scope.go:117] "RemoveContainer" containerID="dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.486753 4687 scope.go:117] "RemoveContainer" containerID="5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28" Mar 12 17:00:37 crc kubenswrapper[4687]: E0312 17:00:37.487126 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28\": container with ID starting with 5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28 not found: ID does not exist" containerID="5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.487170 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28"} err="failed to get container status \"5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28\": rpc error: code = NotFound desc = could not find container \"5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28\": container with ID starting with 5d7d4b11a2dad8d2b803ca49c51b48073b5c53337988b75b952e740ce71cfc28 not found: ID does not exist" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.487197 4687 scope.go:117] "RemoveContainer" containerID="e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73" Mar 12 17:00:37 crc kubenswrapper[4687]: E0312 17:00:37.487747 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73\": container with ID starting with e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73 not found: ID does not exist" containerID="e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.487800 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73"} err="failed to get container status \"e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73\": rpc error: code = NotFound desc = could not find container \"e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73\": container with ID starting with e59b0fa0293b37c15a4c84b6c54bef15438609a2fb6084c05ade03074ddcab73 not found: ID does not exist" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.487834 4687 scope.go:117] "RemoveContainer" containerID="dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec" Mar 12 17:00:37 crc kubenswrapper[4687]: E0312 17:00:37.488248 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec\": container with ID starting with dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec not found: ID does not exist" containerID="dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.488272 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec"} err="failed to get container status \"dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec\": rpc error: code = NotFound desc = could not find container \"dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec\": container with ID starting with dd9e6738caf9d79267651f19c4084020863a1904329fb1bcf188b8fd38cc36ec not found: ID does not exist" Mar 12 17:00:37 crc kubenswrapper[4687]: I0312 17:00:37.747864 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" path="/var/lib/kubelet/pods/6399ce2b-65c3-4723-ba02-50aad9aedd67/volumes" Mar 12 17:00:43 crc kubenswrapper[4687]: E0312 17:00:43.830041 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.38:50730->38.102.83.38:46595: write tcp 38.102.83.38:50730->38.102.83.38:46595: write: connection reset by peer Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.154110 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29555581-pd64h"] Mar 12 17:01:00 crc kubenswrapper[4687]: E0312 17:01:00.155316 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="registry-server" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.155332 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="registry-server" Mar 12 17:01:00 crc kubenswrapper[4687]: E0312 17:01:00.155407 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="extract-utilities" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.155414 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="extract-utilities" Mar 12 17:01:00 crc kubenswrapper[4687]: E0312 17:01:00.155425 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="extract-content" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.155431 4687 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="extract-content" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.155625 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6399ce2b-65c3-4723-ba02-50aad9aedd67" containerName="registry-server" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.156482 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.168773 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555581-pd64h"] Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.288093 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-combined-ca-bundle\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.289329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-fernet-keys\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.289861 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-config-data\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.290483 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8mtl\" (UniqueName: \"kubernetes.io/projected/e93eb345-861a-4da0-a57e-93775cfb061d-kube-api-access-b8mtl\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.392547 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-combined-ca-bundle\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.393086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-fernet-keys\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.393132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-config-data\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.393331 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-b8mtl\" (UniqueName: \"kubernetes.io/projected/e93eb345-861a-4da0-a57e-93775cfb061d-kube-api-access-b8mtl\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.398435 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-fernet-keys\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.400526 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-config-data\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.401783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-combined-ca-bundle\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.412798 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8mtl\" (UniqueName: \"kubernetes.io/projected/e93eb345-861a-4da0-a57e-93775cfb061d-kube-api-access-b8mtl\") pod \"keystone-cron-29555581-pd64h\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.487735 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:00 crc kubenswrapper[4687]: I0312 17:01:00.993960 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555581-pd64h"] Mar 12 17:01:01 crc kubenswrapper[4687]: I0312 17:01:01.630291 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555581-pd64h" event={"ID":"e93eb345-861a-4da0-a57e-93775cfb061d","Type":"ContainerStarted","Data":"b4ebe169fb08f38104cee32dc81c5d13b7f2cbd138b01789cc66faa76ea9ab4b"} Mar 12 17:01:01 crc kubenswrapper[4687]: I0312 17:01:01.630654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555581-pd64h" event={"ID":"e93eb345-861a-4da0-a57e-93775cfb061d","Type":"ContainerStarted","Data":"a304e8fb21cf1ab410be45cb810b8702e1eb9f8fbd4d1dae12ca74e12add94a5"} Mar 12 17:01:01 crc kubenswrapper[4687]: I0312 17:01:01.661338 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29555581-pd64h" podStartSLOduration=1.66131881 podStartE2EDuration="1.66131881s" podCreationTimestamp="2026-03-12 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 17:01:01.646979518 +0000 UTC m=+3510.610941872" watchObservedRunningTime="2026-03-12 17:01:01.66131881 +0000 UTC m=+3510.625281154" Mar 12 17:01:07 crc kubenswrapper[4687]: I0312 17:01:07.705631 4687 generic.go:334] "Generic (PLEG): container finished" podID="e93eb345-861a-4da0-a57e-93775cfb061d" containerID="b4ebe169fb08f38104cee32dc81c5d13b7f2cbd138b01789cc66faa76ea9ab4b" exitCode=0 Mar 12 17:01:07 crc kubenswrapper[4687]: I0312 17:01:07.705828 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555581-pd64h" event={"ID":"e93eb345-861a-4da0-a57e-93775cfb061d","Type":"ContainerDied","Data":"b4ebe169fb08f38104cee32dc81c5d13b7f2cbd138b01789cc66faa76ea9ab4b"} Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.230629 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.406152 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-combined-ca-bundle\") pod \"e93eb345-861a-4da0-a57e-93775cfb061d\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.406315 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8mtl\" (UniqueName: \"kubernetes.io/projected/e93eb345-861a-4da0-a57e-93775cfb061d-kube-api-access-b8mtl\") pod \"e93eb345-861a-4da0-a57e-93775cfb061d\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.406459 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-fernet-keys\") pod \"e93eb345-861a-4da0-a57e-93775cfb061d\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.406496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-config-data\") pod \"e93eb345-861a-4da0-a57e-93775cfb061d\" (UID: \"e93eb345-861a-4da0-a57e-93775cfb061d\") " Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.418757 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e93eb345-861a-4da0-a57e-93775cfb061d" (UID: "e93eb345-861a-4da0-a57e-93775cfb061d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.418800 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93eb345-861a-4da0-a57e-93775cfb061d-kube-api-access-b8mtl" (OuterVolumeSpecName: "kube-api-access-b8mtl") pod "e93eb345-861a-4da0-a57e-93775cfb061d" (UID: "e93eb345-861a-4da0-a57e-93775cfb061d"). InnerVolumeSpecName "kube-api-access-b8mtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.453479 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e93eb345-861a-4da0-a57e-93775cfb061d" (UID: "e93eb345-861a-4da0-a57e-93775cfb061d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.506822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-config-data" (OuterVolumeSpecName: "config-data") pod "e93eb345-861a-4da0-a57e-93775cfb061d" (UID: "e93eb345-861a-4da0-a57e-93775cfb061d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.509889 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.509920 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.509930 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93eb345-861a-4da0-a57e-93775cfb061d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.509942 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8mtl\" (UniqueName: \"kubernetes.io/projected/e93eb345-861a-4da0-a57e-93775cfb061d-kube-api-access-b8mtl\") on node \"crc\" DevicePath \"\"" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.728925 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555581-pd64h" event={"ID":"e93eb345-861a-4da0-a57e-93775cfb061d","Type":"ContainerDied","Data":"a304e8fb21cf1ab410be45cb810b8702e1eb9f8fbd4d1dae12ca74e12add94a5"} Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.728965 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a304e8fb21cf1ab410be45cb810b8702e1eb9f8fbd4d1dae12ca74e12add94a5" Mar 12 17:01:09 crc kubenswrapper[4687]: I0312 17:01:09.729220 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29555581-pd64h" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.153184 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555582-nvh9c"] Mar 12 17:02:00 crc kubenswrapper[4687]: E0312 17:02:00.154426 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93eb345-861a-4da0-a57e-93775cfb061d" containerName="keystone-cron" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.154444 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93eb345-861a-4da0-a57e-93775cfb061d" containerName="keystone-cron" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.154732 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93eb345-861a-4da0-a57e-93775cfb061d" containerName="keystone-cron" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.155820 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555582-nvh9c" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.157764 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.162700 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555582-nvh9c"] Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.200061 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.200160 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.335377 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sgv7\" (UniqueName: \"kubernetes.io/projected/5d826079-5d3f-46e0-b383-9c5d06803b80-kube-api-access-5sgv7\") pod \"auto-csr-approver-29555582-nvh9c\" (UID: \"5d826079-5d3f-46e0-b383-9c5d06803b80\") " pod="openshift-infra/auto-csr-approver-29555582-nvh9c" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.437967 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sgv7\" (UniqueName: \"kubernetes.io/projected/5d826079-5d3f-46e0-b383-9c5d06803b80-kube-api-access-5sgv7\") pod \"auto-csr-approver-29555582-nvh9c\" (UID: \"5d826079-5d3f-46e0-b383-9c5d06803b80\") " pod="openshift-infra/auto-csr-approver-29555582-nvh9c" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.458578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sgv7\" (UniqueName: \"kubernetes.io/projected/5d826079-5d3f-46e0-b383-9c5d06803b80-kube-api-access-5sgv7\") pod \"auto-csr-approver-29555582-nvh9c\" (UID: \"5d826079-5d3f-46e0-b383-9c5d06803b80\") " pod="openshift-infra/auto-csr-approver-29555582-nvh9c" Mar 12 17:02:00 crc kubenswrapper[4687]: I0312 17:02:00.519580 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555582-nvh9c" Mar 12 17:02:01 crc kubenswrapper[4687]: I0312 17:02:01.041019 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555582-nvh9c"] Mar 12 17:02:01 crc kubenswrapper[4687]: I0312 17:02:01.041941 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 17:02:01 crc kubenswrapper[4687]: I0312 17:02:01.305813 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555582-nvh9c" event={"ID":"5d826079-5d3f-46e0-b383-9c5d06803b80","Type":"ContainerStarted","Data":"cb16f4e4dcfd54b687cf18084120e82d5138dde69a06ec5e6fdfe220ffda349d"} Mar 12 17:02:03 crc kubenswrapper[4687]: I0312 17:02:03.327860 4687 generic.go:334] "Generic (PLEG): container finished" podID="5d826079-5d3f-46e0-b383-9c5d06803b80" containerID="b5edb1124b8324e100fdeddef141c60afb93f6900a04220b133c6415943e826c" exitCode=0 Mar 12 17:02:03 crc kubenswrapper[4687]: I0312 17:02:03.327917 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555582-nvh9c" event={"ID":"5d826079-5d3f-46e0-b383-9c5d06803b80","Type":"ContainerDied","Data":"b5edb1124b8324e100fdeddef141c60afb93f6900a04220b133c6415943e826c"} Mar 12 17:02:04 crc kubenswrapper[4687]: I0312 17:02:04.770712 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555582-nvh9c" Mar 12 17:02:04 crc kubenswrapper[4687]: I0312 17:02:04.886262 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sgv7\" (UniqueName: \"kubernetes.io/projected/5d826079-5d3f-46e0-b383-9c5d06803b80-kube-api-access-5sgv7\") pod \"5d826079-5d3f-46e0-b383-9c5d06803b80\" (UID: \"5d826079-5d3f-46e0-b383-9c5d06803b80\") " Mar 12 17:02:04 crc kubenswrapper[4687]: I0312 17:02:04.900689 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d826079-5d3f-46e0-b383-9c5d06803b80-kube-api-access-5sgv7" (OuterVolumeSpecName: "kube-api-access-5sgv7") pod "5d826079-5d3f-46e0-b383-9c5d06803b80" (UID: "5d826079-5d3f-46e0-b383-9c5d06803b80"). InnerVolumeSpecName "kube-api-access-5sgv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:02:04 crc kubenswrapper[4687]: I0312 17:02:04.989193 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sgv7\" (UniqueName: \"kubernetes.io/projected/5d826079-5d3f-46e0-b383-9c5d06803b80-kube-api-access-5sgv7\") on node \"crc\" DevicePath \"\"" Mar 12 17:02:05 crc kubenswrapper[4687]: I0312 17:02:05.348418 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555582-nvh9c" event={"ID":"5d826079-5d3f-46e0-b383-9c5d06803b80","Type":"ContainerDied","Data":"cb16f4e4dcfd54b687cf18084120e82d5138dde69a06ec5e6fdfe220ffda349d"} Mar 12 17:02:05 crc kubenswrapper[4687]: I0312 17:02:05.348698 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb16f4e4dcfd54b687cf18084120e82d5138dde69a06ec5e6fdfe220ffda349d" Mar 12 17:02:05 crc kubenswrapper[4687]: I0312 17:02:05.348460 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555582-nvh9c" Mar 12 17:02:05 crc kubenswrapper[4687]: I0312 17:02:05.850489 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555576-hftml"] Mar 12 17:02:05 crc kubenswrapper[4687]: I0312 17:02:05.864455 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555576-hftml"] Mar 12 17:02:07 crc kubenswrapper[4687]: I0312 17:02:07.748952 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12d70c4-48a4-4c6d-8579-613fb070ec89" path="/var/lib/kubelet/pods/e12d70c4-48a4-4c6d-8579-613fb070ec89/volumes" Mar 12 17:02:14 crc kubenswrapper[4687]: I0312 17:02:14.121444 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:02:14 crc kubenswrapper[4687]: I0312 17:02:14.122070 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:02:35 crc kubenswrapper[4687]: I0312 17:02:35.422338 4687 scope.go:117] "RemoveContainer" containerID="b34424c6e068c49a887ed2309e492ef36491210f0a9382b0fb1203be797b536d" Mar 12 17:02:44 crc kubenswrapper[4687]: I0312 17:02:44.121244 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:02:44 crc kubenswrapper[4687]: I0312 17:02:44.122571 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:03:14 crc kubenswrapper[4687]: I0312 17:03:14.121201 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:03:14 crc kubenswrapper[4687]: I0312 17:03:14.121895 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:03:14 crc kubenswrapper[4687]: I0312 17:03:14.121935 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 17:03:14 crc kubenswrapper[4687]: I0312 17:03:14.122784 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 17:03:14 crc kubenswrapper[4687]: I0312 17:03:14.122828 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" gracePeriod=600 Mar 12 17:03:14 crc kubenswrapper[4687]: E0312 17:03:14.265664 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:03:15 crc kubenswrapper[4687]: I0312 17:03:15.174614 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" exitCode=0 Mar 12 17:03:15 crc kubenswrapper[4687]: I0312 17:03:15.174660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b"} Mar 12 17:03:15 crc kubenswrapper[4687]: I0312 17:03:15.174715 4687 scope.go:117] "RemoveContainer" containerID="9433878e3a9622ce6db53fae0bb4d8ab03ad8236b3772b172987a71ca458659e" Mar 12 17:03:15 crc kubenswrapper[4687]: I0312 17:03:15.175804 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:03:15 crc kubenswrapper[4687]: E0312 17:03:15.176575 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:03:27 crc kubenswrapper[4687]: I0312 17:03:27.733656 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:03:27 crc kubenswrapper[4687]: E0312 17:03:27.735424 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:03:38 crc kubenswrapper[4687]: I0312 17:03:38.734143 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:03:38 crc kubenswrapper[4687]: E0312 17:03:38.735098 4687 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:03:49 crc kubenswrapper[4687]: I0312 17:03:49.733586 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:03:49 crc kubenswrapper[4687]: E0312 17:03:49.734710 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.142215 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555584-jlwl9"] Mar 12 17:04:00 crc kubenswrapper[4687]: E0312 17:04:00.143599 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d826079-5d3f-46e0-b383-9c5d06803b80" containerName="oc" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.143617 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d826079-5d3f-46e0-b383-9c5d06803b80" containerName="oc" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.143911 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d826079-5d3f-46e0-b383-9c5d06803b80" containerName="oc" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.144844 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555584-jlwl9" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.147399 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.147418 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.147399 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.161088 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555584-jlwl9"] Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.229932 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l2m\" (UniqueName: \"kubernetes.io/projected/6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee-kube-api-access-42l2m\") pod \"auto-csr-approver-29555584-jlwl9\" (UID: \"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee\") " pod="openshift-infra/auto-csr-approver-29555584-jlwl9" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.332448 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42l2m\" (UniqueName: \"kubernetes.io/projected/6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee-kube-api-access-42l2m\") pod \"auto-csr-approver-29555584-jlwl9\" (UID: \"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee\") " pod="openshift-infra/auto-csr-approver-29555584-jlwl9" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.355050 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42l2m\" (UniqueName: \"kubernetes.io/projected/6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee-kube-api-access-42l2m\") pod \"auto-csr-approver-29555584-jlwl9\" (UID: \"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee\") " pod="openshift-infra/auto-csr-approver-29555584-jlwl9" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.464222 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555584-jlwl9" Mar 12 17:04:00 crc kubenswrapper[4687]: I0312 17:04:00.941876 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555584-jlwl9"] Mar 12 17:04:01 crc kubenswrapper[4687]: I0312 17:04:01.681455 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555584-jlwl9" event={"ID":"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee","Type":"ContainerStarted","Data":"c3682a5f3cf0d7216627a941711acaa8597eddf04388c0a117ccc67c8b9eac31"} Mar 12 17:04:03 crc kubenswrapper[4687]: I0312 17:04:03.032036 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:04:03 crc kubenswrapper[4687]: E0312 17:04:03.035517 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:04:03 crc kubenswrapper[4687]: I0312 17:04:03.707729 4687 generic.go:334] "Generic (PLEG): container finished" podID="6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee" containerID="a1f3bf20e604a1123c0932fa9a564a05bb79f1d75ff042cb9ae5854151f90f86" exitCode=0 Mar 12 17:04:03 crc kubenswrapper[4687]: I0312 17:04:03.707945 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555584-jlwl9" event={"ID":"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee","Type":"ContainerDied","Data":"a1f3bf20e604a1123c0932fa9a564a05bb79f1d75ff042cb9ae5854151f90f86"} Mar 12 17:04:05 crc kubenswrapper[4687]: I0312 17:04:05.135578 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555584-jlwl9" Mar 12 17:04:05 crc kubenswrapper[4687]: I0312 17:04:05.295112 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l2m\" (UniqueName: \"kubernetes.io/projected/6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee-kube-api-access-42l2m\") pod \"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee\" (UID: \"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee\") " Mar 12 17:04:05 crc kubenswrapper[4687]: I0312 17:04:05.305740 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee-kube-api-access-42l2m" (OuterVolumeSpecName: "kube-api-access-42l2m") pod "6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee" (UID: "6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee"). InnerVolumeSpecName "kube-api-access-42l2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:04:05 crc kubenswrapper[4687]: I0312 17:04:05.398815 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42l2m\" (UniqueName: \"kubernetes.io/projected/6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee-kube-api-access-42l2m\") on node \"crc\" DevicePath \"\"" Mar 12 17:04:05 crc kubenswrapper[4687]: I0312 17:04:05.728629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555584-jlwl9" event={"ID":"6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee","Type":"ContainerDied","Data":"c3682a5f3cf0d7216627a941711acaa8597eddf04388c0a117ccc67c8b9eac31"} Mar 12 17:04:05 crc kubenswrapper[4687]: I0312 17:04:05.728672 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3682a5f3cf0d7216627a941711acaa8597eddf04388c0a117ccc67c8b9eac31" Mar 12 17:04:05 crc kubenswrapper[4687]: I0312 17:04:05.728992 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555584-jlwl9" Mar 12 17:04:06 crc kubenswrapper[4687]: I0312 17:04:06.224638 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555578-s6x2k"] Mar 12 17:04:06 crc kubenswrapper[4687]: I0312 17:04:06.237294 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555578-s6x2k"] Mar 12 17:04:07 crc kubenswrapper[4687]: I0312 17:04:07.748992 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f3bb66-51b6-4f3a-a734-438261e75158" path="/var/lib/kubelet/pods/65f3bb66-51b6-4f3a-a734-438261e75158/volumes" Mar 12 17:04:17 crc kubenswrapper[4687]: I0312 17:04:17.733127 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:04:17 crc kubenswrapper[4687]: E0312 17:04:17.733857 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:04:32 crc kubenswrapper[4687]: I0312 17:04:32.733166 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:04:32 crc kubenswrapper[4687]: E0312 17:04:32.734301 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:04:35 crc kubenswrapper[4687]: I0312 17:04:35.566471 4687 scope.go:117] "RemoveContainer" containerID="9d65ddd9534ed8d1c427f0f7f2416ed92d474cf322c35d8b7705f361305b7173" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.262453 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l7bv8"] Mar 12 17:04:42 crc kubenswrapper[4687]: E0312 17:04:42.263674 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee" 
containerName="oc" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.263690 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee" containerName="oc" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.264045 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee" containerName="oc" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.266268 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.289275 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7bv8"] Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.442117 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpfc\" (UniqueName: \"kubernetes.io/projected/ce59034f-a8f5-4aed-a73a-953613d3026a-kube-api-access-xfpfc\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.442347 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-catalog-content\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.442445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-utilities\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.544562 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-catalog-content\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.544700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-utilities\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.545136 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-catalog-content\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.545196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-utilities\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc 
kubenswrapper[4687]: I0312 17:04:42.545405 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfpfc\" (UniqueName: \"kubernetes.io/projected/ce59034f-a8f5-4aed-a73a-953613d3026a-kube-api-access-xfpfc\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.566213 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpfc\" (UniqueName: \"kubernetes.io/projected/ce59034f-a8f5-4aed-a73a-953613d3026a-kube-api-access-xfpfc\") pod \"redhat-marketplace-l7bv8\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:42 crc kubenswrapper[4687]: I0312 17:04:42.589068 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:43 crc kubenswrapper[4687]: I0312 17:04:43.098617 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7bv8"] Mar 12 17:04:43 crc kubenswrapper[4687]: I0312 17:04:43.147257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7bv8" event={"ID":"ce59034f-a8f5-4aed-a73a-953613d3026a","Type":"ContainerStarted","Data":"83c4ae0cdf6d70105673d9cb92de3e85ad5d84e9d2a9441873fdfe09a6c31df1"} Mar 12 17:04:44 crc kubenswrapper[4687]: I0312 17:04:44.161258 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerID="4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100" exitCode=0 Mar 12 17:04:44 crc kubenswrapper[4687]: I0312 17:04:44.161323 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7bv8" event={"ID":"ce59034f-a8f5-4aed-a73a-953613d3026a","Type":"ContainerDied","Data":"4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100"} Mar 12 17:04:46 crc kubenswrapper[4687]: I0312 17:04:46.186858 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7bv8" event={"ID":"ce59034f-a8f5-4aed-a73a-953613d3026a","Type":"ContainerStarted","Data":"dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a"} Mar 12 17:04:47 crc kubenswrapper[4687]: I0312 17:04:47.199510 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerID="dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a" exitCode=0 Mar 12 17:04:47 crc kubenswrapper[4687]: I0312 17:04:47.199555 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7bv8" event={"ID":"ce59034f-a8f5-4aed-a73a-953613d3026a","Type":"ContainerDied","Data":"dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a"} Mar 12 17:04:47 crc kubenswrapper[4687]: I0312 17:04:47.734802 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:04:47 crc kubenswrapper[4687]: E0312 17:04:47.735383 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:04:48 crc kubenswrapper[4687]: I0312 17:04:48.212060 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7bv8" event={"ID":"ce59034f-a8f5-4aed-a73a-953613d3026a","Type":"ContainerStarted","Data":"34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7"} Mar 12 17:04:48 crc kubenswrapper[4687]: I0312 17:04:48.240575 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l7bv8" podStartSLOduration=2.804151849 podStartE2EDuration="6.240544011s" podCreationTimestamp="2026-03-12 17:04:42 +0000 UTC" firstStartedPulling="2026-03-12 17:04:44.165765239 +0000 UTC m=+3733.129727583" lastFinishedPulling="2026-03-12 17:04:47.602157381 +0000 UTC m=+3736.566119745" observedRunningTime="2026-03-12 17:04:48.231419812 +0000 UTC m=+3737.195382156" watchObservedRunningTime="2026-03-12 17:04:48.240544011 +0000 UTC m=+3737.204506385" Mar 12 17:04:52 crc kubenswrapper[4687]: I0312 17:04:52.589867 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:52 crc kubenswrapper[4687]: I0312 17:04:52.590718 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:52 crc kubenswrapper[4687]: I0312 17:04:52.640330 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:53 crc kubenswrapper[4687]: I0312 17:04:53.309227 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:53 crc kubenswrapper[4687]: I0312 17:04:53.366150 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7bv8"] Mar 12 17:04:55 crc kubenswrapper[4687]: I0312 17:04:55.287324 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l7bv8" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="registry-server" containerID="cri-o://34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7" gracePeriod=2 Mar 12 17:04:55 crc kubenswrapper[4687]: I0312 17:04:55.818504 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:55 crc kubenswrapper[4687]: I0312 17:04:55.975286 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfpfc\" (UniqueName: \"kubernetes.io/projected/ce59034f-a8f5-4aed-a73a-953613d3026a-kube-api-access-xfpfc\") pod \"ce59034f-a8f5-4aed-a73a-953613d3026a\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " Mar 12 17:04:55 crc kubenswrapper[4687]: I0312 17:04:55.975514 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-catalog-content\") pod \"ce59034f-a8f5-4aed-a73a-953613d3026a\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " Mar 12 17:04:55 crc kubenswrapper[4687]: I0312 17:04:55.975810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-utilities\") pod \"ce59034f-a8f5-4aed-a73a-953613d3026a\" (UID: \"ce59034f-a8f5-4aed-a73a-953613d3026a\") " Mar 12 17:04:55 crc kubenswrapper[4687]: I0312 17:04:55.976497 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-utilities" (OuterVolumeSpecName: "utilities") pod "ce59034f-a8f5-4aed-a73a-953613d3026a" (UID: "ce59034f-a8f5-4aed-a73a-953613d3026a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:04:55 crc kubenswrapper[4687]: I0312 17:04:55.983401 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce59034f-a8f5-4aed-a73a-953613d3026a-kube-api-access-xfpfc" (OuterVolumeSpecName: "kube-api-access-xfpfc") pod "ce59034f-a8f5-4aed-a73a-953613d3026a" (UID: "ce59034f-a8f5-4aed-a73a-953613d3026a"). InnerVolumeSpecName "kube-api-access-xfpfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.002883 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce59034f-a8f5-4aed-a73a-953613d3026a" (UID: "ce59034f-a8f5-4aed-a73a-953613d3026a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.080724 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.081237 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfpfc\" (UniqueName: \"kubernetes.io/projected/ce59034f-a8f5-4aed-a73a-953613d3026a-kube-api-access-xfpfc\") on node \"crc\" DevicePath \"\"" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.081287 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce59034f-a8f5-4aed-a73a-953613d3026a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.303059 4687 generic.go:334] "Generic (PLEG): container finished" podID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerID="34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7" exitCode=0 Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.303104 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7bv8" event={"ID":"ce59034f-a8f5-4aed-a73a-953613d3026a","Type":"ContainerDied","Data":"34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7"} Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.303132 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l7bv8" event={"ID":"ce59034f-a8f5-4aed-a73a-953613d3026a","Type":"ContainerDied","Data":"83c4ae0cdf6d70105673d9cb92de3e85ad5d84e9d2a9441873fdfe09a6c31df1"} Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.303148 4687 scope.go:117] "RemoveContainer" containerID="34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.303309 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l7bv8" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.331318 4687 scope.go:117] "RemoveContainer" containerID="dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.344743 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7bv8"] Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.356079 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l7bv8"] Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.361468 4687 scope.go:117] "RemoveContainer" containerID="4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.410969 4687 scope.go:117] "RemoveContainer" containerID="34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7" Mar 12 17:04:56 crc kubenswrapper[4687]: E0312 17:04:56.411528 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7\": container with ID starting with 34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7 not found: ID does not exist" containerID="34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.411563 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7"} err="failed to get container status \"34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7\": rpc error: code = NotFound desc = could not find container \"34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7\": container with ID starting with 34b08173c2e6cea4475e47b8e2571fc1f9b832443142de00055a7fbbee4917d7 not found: ID does not exist" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.411586 4687 scope.go:117] "RemoveContainer" containerID="dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a" Mar 12 17:04:56 crc kubenswrapper[4687]: E0312 17:04:56.411921 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a\": container with ID starting with dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a not found: ID does not exist" containerID="dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.411962 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a"} err="failed to get container status \"dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a\": rpc error: code = NotFound desc = could not find container \"dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a\": container with ID starting with dd3ec683241acab327d0cf7f9554ab69522b44376e012302cce2079cc64de26a not found: ID does not exist" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.411988 4687 scope.go:117] "RemoveContainer" containerID="4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100" Mar 12 17:04:56 crc kubenswrapper[4687]: E0312 17:04:56.412305 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100\": container with ID starting with 4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100 not found: ID does not exist" containerID="4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100" Mar 12 17:04:56 crc kubenswrapper[4687]: I0312 17:04:56.412347 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100"} err="failed to get container status \"4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100\": rpc error: code = NotFound desc = could not find container \"4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100\": container with ID starting with 4e02b23129a4dff28de858f61293aea16a25ce0777671b240073fd0e919fa100 not found: ID does not exist" Mar 12 17:04:57 crc kubenswrapper[4687]: I0312 17:04:57.753964 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" path="/var/lib/kubelet/pods/ce59034f-a8f5-4aed-a73a-953613d3026a/volumes" Mar 12 17:05:00 crc kubenswrapper[4687]: I0312 17:05:00.737127 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:05:00 crc kubenswrapper[4687]: E0312 17:05:00.738350 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:05:11 crc kubenswrapper[4687]: I0312 17:05:11.740421 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:05:11 crc kubenswrapper[4687]: E0312 17:05:11.741415 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:05:24 crc kubenswrapper[4687]: I0312 17:05:24.733935 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:05:24 crc kubenswrapper[4687]: E0312 17:05:24.734710 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:05:38 crc kubenswrapper[4687]: I0312 17:05:38.733948 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:05:38 crc kubenswrapper[4687]: E0312 17:05:38.734861 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:05:51 crc kubenswrapper[4687]: I0312 17:05:51.741821 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:05:51 crc kubenswrapper[4687]: E0312 17:05:51.742748 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.146123 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555586-8dm5j"] Mar 12 17:06:00 crc kubenswrapper[4687]: E0312 17:06:00.148146 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="registry-server" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.148235 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="registry-server" Mar 12 17:06:00 crc kubenswrapper[4687]: E0312 17:06:00.148320 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="extract-content" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.148397 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="extract-content" Mar 12 17:06:00 crc kubenswrapper[4687]: E0312 17:06:00.148473 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="extract-utilities" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.148541 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="extract-utilities" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.148895 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce59034f-a8f5-4aed-a73a-953613d3026a" containerName="registry-server" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.149812 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.152354 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.152527 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.153919 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.157945 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555586-8dm5j"] Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.256155 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jz2l\" (UniqueName: \"kubernetes.io/projected/f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5-kube-api-access-8jz2l\") pod \"auto-csr-approver-29555586-8dm5j\" (UID: \"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5\") " pod="openshift-infra/auto-csr-approver-29555586-8dm5j" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.359140 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jz2l\" (UniqueName: \"kubernetes.io/projected/f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5-kube-api-access-8jz2l\") pod \"auto-csr-approver-29555586-8dm5j\" (UID: \"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5\") " pod="openshift-infra/auto-csr-approver-29555586-8dm5j" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.381890 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jz2l\" (UniqueName: \"kubernetes.io/projected/f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5-kube-api-access-8jz2l\") pod \"auto-csr-approver-29555586-8dm5j\" (UID: \"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5\") " pod="openshift-infra/auto-csr-approver-29555586-8dm5j" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.485027 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" Mar 12 17:06:00 crc kubenswrapper[4687]: I0312 17:06:00.963247 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555586-8dm5j"] Mar 12 17:06:01 crc kubenswrapper[4687]: I0312 17:06:01.976696 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" event={"ID":"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5","Type":"ContainerStarted","Data":"86cf74491f56f4b4cbe34f7be97f9c0f71bc3f7612c7fb44801f836786e95d69"} Mar 12 17:06:03 crc kubenswrapper[4687]: I0312 17:06:03.010801 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" event={"ID":"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5","Type":"ContainerStarted","Data":"655ff34c4f630be2017f0849cb630c007aa644978f4135e3874c02537237ab64"} Mar 12 17:06:03 crc kubenswrapper[4687]: I0312 17:06:03.049899 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" podStartSLOduration=1.516846808 podStartE2EDuration="3.049872124s" podCreationTimestamp="2026-03-12 17:06:00 +0000 UTC" firstStartedPulling="2026-03-12 17:06:00.965055109 +0000 UTC m=+3809.929017453" lastFinishedPulling="2026-03-12 17:06:02.498080425 +0000 UTC m=+3811.462042769" observedRunningTime="2026-03-12 17:06:03.040673222 +0000 UTC m=+3812.004635576" watchObservedRunningTime="2026-03-12 17:06:03.049872124 +0000 UTC m=+3812.013834498" Mar 12 17:06:04 crc kubenswrapper[4687]: I0312 17:06:04.022182 4687 generic.go:334] "Generic (PLEG): container finished" podID="f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5" containerID="655ff34c4f630be2017f0849cb630c007aa644978f4135e3874c02537237ab64" exitCode=0 Mar 12 17:06:04 crc kubenswrapper[4687]: I0312 17:06:04.022236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" event={"ID":"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5","Type":"ContainerDied","Data":"655ff34c4f630be2017f0849cb630c007aa644978f4135e3874c02537237ab64"} Mar 12 17:06:04 crc kubenswrapper[4687]: I0312 17:06:04.733126 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:06:04 crc kubenswrapper[4687]: E0312 17:06:04.733461 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:06:05 crc kubenswrapper[4687]: I0312 17:06:05.512701 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" Mar 12 17:06:05 crc kubenswrapper[4687]: I0312 17:06:05.601102 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jz2l\" (UniqueName: \"kubernetes.io/projected/f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5-kube-api-access-8jz2l\") pod \"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5\" (UID: \"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5\") " Mar 12 17:06:05 crc kubenswrapper[4687]: I0312 17:06:05.610443 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5-kube-api-access-8jz2l" (OuterVolumeSpecName: "kube-api-access-8jz2l") pod "f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5" (UID: "f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5"). InnerVolumeSpecName "kube-api-access-8jz2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:06:05 crc kubenswrapper[4687]: I0312 17:06:05.704405 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jz2l\" (UniqueName: \"kubernetes.io/projected/f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5-kube-api-access-8jz2l\") on node \"crc\" DevicePath \"\"" Mar 12 17:06:06 crc kubenswrapper[4687]: I0312 17:06:06.044815 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" event={"ID":"f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5","Type":"ContainerDied","Data":"86cf74491f56f4b4cbe34f7be97f9c0f71bc3f7612c7fb44801f836786e95d69"} Mar 12 17:06:06 crc kubenswrapper[4687]: I0312 17:06:06.045074 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86cf74491f56f4b4cbe34f7be97f9c0f71bc3f7612c7fb44801f836786e95d69" Mar 12 17:06:06 crc kubenswrapper[4687]: I0312 17:06:06.044871 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555586-8dm5j" Mar 12 17:06:06 crc kubenswrapper[4687]: I0312 17:06:06.599309 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555580-77nr7"] Mar 12 17:06:06 crc kubenswrapper[4687]: I0312 17:06:06.611990 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555580-77nr7"] Mar 12 17:06:07 crc kubenswrapper[4687]: I0312 17:06:07.746623 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988076ed-736a-4a8f-b3a6-fef130be728f" path="/var/lib/kubelet/pods/988076ed-736a-4a8f-b3a6-fef130be728f/volumes" Mar 12 17:06:12 crc kubenswrapper[4687]: E0312 17:06:12.829985 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.38:53220->38.102.83.38:46595: write tcp 38.102.83.38:53220->38.102.83.38:46595: write: connection reset by peer Mar 12 17:06:15 crc kubenswrapper[4687]: I0312 17:06:15.733351 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:06:15 crc kubenswrapper[4687]: E0312 17:06:15.734203 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:06:27 crc kubenswrapper[4687]: I0312 17:06:27.733088 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:06:27 crc kubenswrapper[4687]: E0312 17:06:27.733757 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:06:35 crc kubenswrapper[4687]: I0312 17:06:35.740227 4687 scope.go:117] "RemoveContainer" containerID="38163a90608dcbed85a16b3de83f08705e1aa5792559def6e958ef9d4cf29aa3" Mar 12 17:06:38 crc kubenswrapper[4687]: I0312 17:06:38.733640 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:06:38 crc kubenswrapper[4687]: E0312 17:06:38.734488 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:06:53 crc kubenswrapper[4687]: I0312 17:06:53.734019 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:06:53 crc kubenswrapper[4687]: E0312 17:06:53.735679 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:07:08 crc kubenswrapper[4687]: I0312 17:07:08.732994 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:07:08 crc kubenswrapper[4687]: E0312 17:07:08.733894 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:07:17 crc kubenswrapper[4687]: E0312 17:07:17.858752 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.38:41928->38.102.83.38:46595: write tcp 38.102.83.38:41928->38.102.83.38:46595: write: broken pipe Mar 12 17:07:20 crc kubenswrapper[4687]: I0312 17:07:20.733592 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:07:20 crc kubenswrapper[4687]: E0312 17:07:20.734287 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.565312 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njpk2"] Mar 12 17:07:21 crc kubenswrapper[4687]: E0312 17:07:21.565928 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5" containerName="oc" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.565947 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5" containerName="oc" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.566204 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5" containerName="oc" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.567989 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.587787 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njpk2"] Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.621952 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-utilities\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.621992 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-catalog-content\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.622028 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsvlk\" (UniqueName: \"kubernetes.io/projected/862cf014-94d5-4ebd-9e9d-06586b794944-kube-api-access-vsvlk\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.724566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-utilities\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.724610 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-catalog-content\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.724656 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsvlk\" (UniqueName: \"kubernetes.io/projected/862cf014-94d5-4ebd-9e9d-06586b794944-kube-api-access-vsvlk\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.725782 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-utilities\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.725863 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-catalog-content\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.874960 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vsvlk\" (UniqueName: \"kubernetes.io/projected/862cf014-94d5-4ebd-9e9d-06586b794944-kube-api-access-vsvlk\") pod \"certified-operators-njpk2\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:21 crc kubenswrapper[4687]: I0312 17:07:21.892220 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:22 crc kubenswrapper[4687]: I0312 17:07:22.537702 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njpk2"] Mar 12 17:07:22 crc kubenswrapper[4687]: W0312 17:07:22.540455 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod862cf014_94d5_4ebd_9e9d_06586b794944.slice/crio-5d4ccb94170adb8a45d50effb049f820542b8cd8e92b5978c110daec2267fc23 WatchSource:0}: Error finding container 5d4ccb94170adb8a45d50effb049f820542b8cd8e92b5978c110daec2267fc23: Status 404 returned error can't find the container with id 5d4ccb94170adb8a45d50effb049f820542b8cd8e92b5978c110daec2267fc23 Mar 12 17:07:22 crc kubenswrapper[4687]: I0312 17:07:22.871836 4687 generic.go:334] "Generic (PLEG): container finished" podID="862cf014-94d5-4ebd-9e9d-06586b794944" containerID="33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14" exitCode=0 Mar 12 17:07:22 crc kubenswrapper[4687]: I0312 17:07:22.871882 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njpk2" event={"ID":"862cf014-94d5-4ebd-9e9d-06586b794944","Type":"ContainerDied","Data":"33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14"} Mar 12 17:07:22 crc kubenswrapper[4687]: I0312 17:07:22.871910 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njpk2" event={"ID":"862cf014-94d5-4ebd-9e9d-06586b794944","Type":"ContainerStarted","Data":"5d4ccb94170adb8a45d50effb049f820542b8cd8e92b5978c110daec2267fc23"} Mar 12 17:07:22 crc kubenswrapper[4687]: I0312 17:07:22.877278 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 17:07:24 crc kubenswrapper[4687]: I0312 17:07:24.901764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njpk2" event={"ID":"862cf014-94d5-4ebd-9e9d-06586b794944","Type":"ContainerStarted","Data":"ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377"} Mar 12 17:07:26 crc kubenswrapper[4687]: I0312 17:07:26.937745 4687 generic.go:334] "Generic (PLEG): container finished" podID="862cf014-94d5-4ebd-9e9d-06586b794944" containerID="ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377" exitCode=0 Mar 12 17:07:26 crc kubenswrapper[4687]: I0312 17:07:26.937857 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njpk2" event={"ID":"862cf014-94d5-4ebd-9e9d-06586b794944","Type":"ContainerDied","Data":"ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377"} Mar 12 17:07:27 crc kubenswrapper[4687]: I0312 17:07:27.951100 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njpk2" event={"ID":"862cf014-94d5-4ebd-9e9d-06586b794944","Type":"ContainerStarted","Data":"2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c"} Mar 12 17:07:27 crc kubenswrapper[4687]: I0312 
17:07:27.976326 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njpk2" podStartSLOduration=2.418181198 podStartE2EDuration="6.97630298s" podCreationTimestamp="2026-03-12 17:07:21 +0000 UTC" firstStartedPulling="2026-03-12 17:07:22.876634168 +0000 UTC m=+3891.840596512" lastFinishedPulling="2026-03-12 17:07:27.43475595 +0000 UTC m=+3896.398718294" observedRunningTime="2026-03-12 17:07:27.966903665 +0000 UTC m=+3896.930866009" watchObservedRunningTime="2026-03-12 17:07:27.97630298 +0000 UTC m=+3896.940265324" Mar 12 17:07:31 crc kubenswrapper[4687]: I0312 17:07:31.893068 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:31 crc kubenswrapper[4687]: I0312 17:07:31.893618 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:31 crc kubenswrapper[4687]: I0312 17:07:31.952180 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:33 crc kubenswrapper[4687]: I0312 17:07:33.732873 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:07:33 crc kubenswrapper[4687]: E0312 17:07:33.733432 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:07:39 crc kubenswrapper[4687]: I0312 17:07:39.180501 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7b9d7fc5b5-76d88" podUID="0bbd130e-9a81-466f-8d89-79c2fa5fdc4c" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.004232 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qxzxt"] Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.007231 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.020260 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxzxt"] Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.106594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgk9d\" (UniqueName: \"kubernetes.io/projected/cccc00c5-85a9-4bec-a026-6f69a13b77c2-kube-api-access-wgk9d\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.106966 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-catalog-content\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.107033 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-utilities\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.209577 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgk9d\" (UniqueName: \"kubernetes.io/projected/cccc00c5-85a9-4bec-a026-6f69a13b77c2-kube-api-access-wgk9d\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.209772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-catalog-content\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.209803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-utilities\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.210371 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-utilities\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.210947 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-catalog-content\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.236993 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wgk9d\" (UniqueName: \"kubernetes.io/projected/cccc00c5-85a9-4bec-a026-6f69a13b77c2-kube-api-access-wgk9d\") pod \"redhat-operators-qxzxt\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.329216 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.900269 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxzxt"] Mar 12 17:07:41 crc kubenswrapper[4687]: I0312 17:07:41.997603 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:42 crc kubenswrapper[4687]: I0312 17:07:42.099312 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxzxt" event={"ID":"cccc00c5-85a9-4bec-a026-6f69a13b77c2","Type":"ContainerStarted","Data":"c31e54252b3456173f415e9caaf4ac9c3e390240853c1e9ef3b32b5668d3a920"} Mar 12 17:07:43 crc kubenswrapper[4687]: I0312 17:07:43.116192 4687 generic.go:334] "Generic (PLEG): container finished" podID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerID="5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f" exitCode=0 Mar 12 17:07:43 crc kubenswrapper[4687]: I0312 17:07:43.116260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxzxt" event={"ID":"cccc00c5-85a9-4bec-a026-6f69a13b77c2","Type":"ContainerDied","Data":"5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f"} Mar 12 17:07:44 crc kubenswrapper[4687]: I0312 17:07:44.375995 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njpk2"] Mar 12 17:07:44 crc kubenswrapper[4687]: I0312 17:07:44.376633 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njpk2" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="registry-server" containerID="cri-o://2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c" gracePeriod=2 Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.091470 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.139303 4687 generic.go:334] "Generic (PLEG): container finished" podID="862cf014-94d5-4ebd-9e9d-06586b794944" containerID="2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c" exitCode=0 Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.139398 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njpk2" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.139404 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njpk2" event={"ID":"862cf014-94d5-4ebd-9e9d-06586b794944","Type":"ContainerDied","Data":"2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c"} Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.139463 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njpk2" event={"ID":"862cf014-94d5-4ebd-9e9d-06586b794944","Type":"ContainerDied","Data":"5d4ccb94170adb8a45d50effb049f820542b8cd8e92b5978c110daec2267fc23"} Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.139487 4687 scope.go:117] "RemoveContainer" containerID="2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.145158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxzxt" event={"ID":"cccc00c5-85a9-4bec-a026-6f69a13b77c2","Type":"ContainerStarted","Data":"509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079"} Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.187658 4687 scope.go:117] "RemoveContainer" containerID="ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.209347 4687 scope.go:117] "RemoveContainer" containerID="33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.236636 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-utilities\") pod \"862cf014-94d5-4ebd-9e9d-06586b794944\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.236800 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-catalog-content\") pod \"862cf014-94d5-4ebd-9e9d-06586b794944\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.237924 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsvlk\" (UniqueName: \"kubernetes.io/projected/862cf014-94d5-4ebd-9e9d-06586b794944-kube-api-access-vsvlk\") pod \"862cf014-94d5-4ebd-9e9d-06586b794944\" (UID: \"862cf014-94d5-4ebd-9e9d-06586b794944\") " Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.241874 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-utilities" (OuterVolumeSpecName: "utilities") pod "862cf014-94d5-4ebd-9e9d-06586b794944" (UID: "862cf014-94d5-4ebd-9e9d-06586b794944"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.246977 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862cf014-94d5-4ebd-9e9d-06586b794944-kube-api-access-vsvlk" (OuterVolumeSpecName: "kube-api-access-vsvlk") pod "862cf014-94d5-4ebd-9e9d-06586b794944" (UID: "862cf014-94d5-4ebd-9e9d-06586b794944"). InnerVolumeSpecName "kube-api-access-vsvlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.319566 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "862cf014-94d5-4ebd-9e9d-06586b794944" (UID: "862cf014-94d5-4ebd-9e9d-06586b794944"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.336848 4687 scope.go:117] "RemoveContainer" containerID="2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c" Mar 12 17:07:45 crc kubenswrapper[4687]: E0312 17:07:45.337465 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c\": container with ID starting with 2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c not found: ID does not exist" containerID="2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.337493 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c"} err="failed to get container status \"2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c\": rpc error: code = NotFound desc = could not find container \"2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c\": container with ID starting with 2793a619677b70a8b6953cf1839e7726859048dda73bf9963173344b94fd523c not found: ID does not exist" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.337512 4687 scope.go:117] "RemoveContainer" containerID="ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377" Mar 12 17:07:45 crc kubenswrapper[4687]: E0312 17:07:45.337812 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377\": container with ID starting with ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377 not found: ID does not exist" containerID="ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.337836 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377"} err="failed to get container status \"ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377\": rpc error: code = NotFound desc = could not find container \"ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377\": container with ID starting with ae31ffa9cf5c59089224b5c7e82c9828ad5e8e171633e7ca28efe80910592377 not found: ID does not exist" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.337849 4687 scope.go:117] "RemoveContainer" containerID="33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14" Mar 12 17:07:45 crc kubenswrapper[4687]: E0312 17:07:45.338202 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14\": container with ID starting with 33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14 not found: ID does not exist" 
containerID="33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.338220 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14"} err="failed to get container status \"33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14\": rpc error: code = NotFound desc = could not find container \"33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14\": container with ID starting with 33a648b8d959a55e0f5a49038cb85efc0913bfe6651e505d7d2e9784a4bfbf14 not found: ID does not exist" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.341057 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsvlk\" (UniqueName: \"kubernetes.io/projected/862cf014-94d5-4ebd-9e9d-06586b794944-kube-api-access-vsvlk\") on node \"crc\" DevicePath \"\"" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.341075 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.341084 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862cf014-94d5-4ebd-9e9d-06586b794944-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.482052 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njpk2"] Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.497724 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njpk2"] Mar 12 17:07:45 crc kubenswrapper[4687]: I0312 17:07:45.750262 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" path="/var/lib/kubelet/pods/862cf014-94d5-4ebd-9e9d-06586b794944/volumes" Mar 12 17:07:46 crc kubenswrapper[4687]: I0312 17:07:46.733186 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:07:46 crc kubenswrapper[4687]: E0312 17:07:46.733714 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:07:50 crc kubenswrapper[4687]: I0312 17:07:50.211093 4687 generic.go:334] "Generic (PLEG): container finished" podID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerID="509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079" exitCode=0 Mar 12 17:07:50 crc kubenswrapper[4687]: I0312 17:07:50.211159 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxzxt" event={"ID":"cccc00c5-85a9-4bec-a026-6f69a13b77c2","Type":"ContainerDied","Data":"509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079"} Mar 12 17:07:51 crc kubenswrapper[4687]: I0312 17:07:51.226434 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxzxt" 
event={"ID":"cccc00c5-85a9-4bec-a026-6f69a13b77c2","Type":"ContainerStarted","Data":"9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7"} Mar 12 17:07:51 crc kubenswrapper[4687]: I0312 17:07:51.256802 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qxzxt" podStartSLOduration=3.528487715 podStartE2EDuration="11.256783862s" podCreationTimestamp="2026-03-12 17:07:40 +0000 UTC" firstStartedPulling="2026-03-12 17:07:43.11966239 +0000 UTC m=+3912.083624734" lastFinishedPulling="2026-03-12 17:07:50.847958537 +0000 UTC m=+3919.811920881" observedRunningTime="2026-03-12 17:07:51.247282845 +0000 UTC m=+3920.211245189" watchObservedRunningTime="2026-03-12 17:07:51.256783862 +0000 UTC m=+3920.220746206" Mar 12 17:07:51 crc kubenswrapper[4687]: I0312 17:07:51.329933 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:51 crc kubenswrapper[4687]: I0312 17:07:51.329977 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:07:52 crc kubenswrapper[4687]: I0312 17:07:52.379692 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxzxt" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="registry-server" probeResult="failure" output=< Mar 12 17:07:52 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:07:52 crc kubenswrapper[4687]: > Mar 12 17:07:58 crc kubenswrapper[4687]: I0312 17:07:58.732990 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:07:58 crc kubenswrapper[4687]: E0312 17:07:58.733836 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.158710 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555588-xkfwf"] Mar 12 17:08:00 crc kubenswrapper[4687]: E0312 17:08:00.159678 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="registry-server" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.159695 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="registry-server" Mar 12 17:08:00 crc kubenswrapper[4687]: E0312 17:08:00.159719 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="extract-content" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.159726 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="extract-content" Mar 12 17:08:00 crc kubenswrapper[4687]: E0312 17:08:00.159774 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="extract-utilities" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.159782 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="extract-utilities" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.160050 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="862cf014-94d5-4ebd-9e9d-06586b794944" containerName="registry-server" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.161003 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.163916 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.164593 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.168704 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.177947 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555588-xkfwf"] Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.234709 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmvr\" (UniqueName: \"kubernetes.io/projected/53f05d42-1887-472f-a116-9034ce051e6d-kube-api-access-rqmvr\") pod \"auto-csr-approver-29555588-xkfwf\" (UID: \"53f05d42-1887-472f-a116-9034ce051e6d\") " pod="openshift-infra/auto-csr-approver-29555588-xkfwf" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.337194 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmvr\" (UniqueName: \"kubernetes.io/projected/53f05d42-1887-472f-a116-9034ce051e6d-kube-api-access-rqmvr\") pod \"auto-csr-approver-29555588-xkfwf\" (UID: \"53f05d42-1887-472f-a116-9034ce051e6d\") " pod="openshift-infra/auto-csr-approver-29555588-xkfwf" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.359938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmvr\" (UniqueName: \"kubernetes.io/projected/53f05d42-1887-472f-a116-9034ce051e6d-kube-api-access-rqmvr\") pod \"auto-csr-approver-29555588-xkfwf\" (UID: \"53f05d42-1887-472f-a116-9034ce051e6d\") " pod="openshift-infra/auto-csr-approver-29555588-xkfwf" Mar 12 17:08:00 crc kubenswrapper[4687]: I0312 17:08:00.489811 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" Mar 12 17:08:01 crc kubenswrapper[4687]: I0312 17:08:01.009950 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555588-xkfwf"] Mar 12 17:08:01 crc kubenswrapper[4687]: I0312 17:08:01.589578 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" event={"ID":"53f05d42-1887-472f-a116-9034ce051e6d","Type":"ContainerStarted","Data":"e03d844f89e01ee4214c944a66cee092cbdb73bf7b9d2b08c9b97b2255d1c718"} Mar 12 17:08:02 crc kubenswrapper[4687]: I0312 17:08:02.379786 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxzxt" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="registry-server" probeResult="failure" output=< Mar 12 17:08:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:08:02 crc kubenswrapper[4687]: > Mar 12 17:08:02 crc kubenswrapper[4687]: I0312 17:08:02.605361 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" event={"ID":"53f05d42-1887-472f-a116-9034ce051e6d","Type":"ContainerStarted","Data":"9972ff27e0a308206917a0495d2811a8f031d70c777129e96ad5b14cdbfeb830"} Mar 12 17:08:02 crc kubenswrapper[4687]: I0312 17:08:02.620926 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" podStartSLOduration=1.564423309 podStartE2EDuration="2.620906628s" podCreationTimestamp="2026-03-12 17:08:00 +0000 UTC" firstStartedPulling="2026-03-12 17:08:01.012869879 +0000 UTC m=+3929.976832223" lastFinishedPulling="2026-03-12 17:08:02.069353198 +0000 UTC m=+3931.033315542" observedRunningTime="2026-03-12 17:08:02.618125733 +0000 UTC m=+3931.582088097" watchObservedRunningTime="2026-03-12 17:08:02.620906628 +0000 UTC m=+3931.584868982" Mar 12 17:08:03 crc kubenswrapper[4687]: I0312 17:08:03.618570 4687 generic.go:334] "Generic (PLEG): container finished" podID="53f05d42-1887-472f-a116-9034ce051e6d" containerID="9972ff27e0a308206917a0495d2811a8f031d70c777129e96ad5b14cdbfeb830" exitCode=0 Mar 12 17:08:03 crc kubenswrapper[4687]: I0312 17:08:03.618641 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" event={"ID":"53f05d42-1887-472f-a116-9034ce051e6d","Type":"ContainerDied","Data":"9972ff27e0a308206917a0495d2811a8f031d70c777129e96ad5b14cdbfeb830"} Mar 12 17:08:05 crc kubenswrapper[4687]: I0312 17:08:05.211870 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" Mar 12 17:08:05 crc kubenswrapper[4687]: I0312 17:08:05.268324 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqmvr\" (UniqueName: \"kubernetes.io/projected/53f05d42-1887-472f-a116-9034ce051e6d-kube-api-access-rqmvr\") pod \"53f05d42-1887-472f-a116-9034ce051e6d\" (UID: \"53f05d42-1887-472f-a116-9034ce051e6d\") " Mar 12 17:08:05 crc kubenswrapper[4687]: I0312 17:08:05.273987 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f05d42-1887-472f-a116-9034ce051e6d-kube-api-access-rqmvr" (OuterVolumeSpecName: "kube-api-access-rqmvr") pod "53f05d42-1887-472f-a116-9034ce051e6d" (UID: "53f05d42-1887-472f-a116-9034ce051e6d"). InnerVolumeSpecName "kube-api-access-rqmvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:08:05 crc kubenswrapper[4687]: I0312 17:08:05.371071 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqmvr\" (UniqueName: \"kubernetes.io/projected/53f05d42-1887-472f-a116-9034ce051e6d-kube-api-access-rqmvr\") on node \"crc\" DevicePath \"\"" Mar 12 17:08:05 crc kubenswrapper[4687]: I0312 17:08:05.642197 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" event={"ID":"53f05d42-1887-472f-a116-9034ce051e6d","Type":"ContainerDied","Data":"e03d844f89e01ee4214c944a66cee092cbdb73bf7b9d2b08c9b97b2255d1c718"} Mar 12 17:08:05 crc kubenswrapper[4687]: I0312 17:08:05.642573 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e03d844f89e01ee4214c944a66cee092cbdb73bf7b9d2b08c9b97b2255d1c718" Mar 12 17:08:05 crc kubenswrapper[4687]: I0312 17:08:05.642232 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555588-xkfwf" Mar 12 17:08:06 crc kubenswrapper[4687]: I0312 17:08:06.292100 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555582-nvh9c"] Mar 12 17:08:06 crc kubenswrapper[4687]: I0312 17:08:06.307987 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555582-nvh9c"] Mar 12 17:08:07 crc kubenswrapper[4687]: I0312 17:08:07.748173 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d826079-5d3f-46e0-b383-9c5d06803b80" path="/var/lib/kubelet/pods/5d826079-5d3f-46e0-b383-9c5d06803b80/volumes" Mar 12 17:08:10 crc kubenswrapper[4687]: I0312 17:08:10.733429 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:08:10 crc kubenswrapper[4687]: E0312 17:08:10.734182 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:08:11 crc kubenswrapper[4687]: I0312 17:08:11.377152 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:08:11 crc kubenswrapper[4687]: I0312 17:08:11.430763 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:08:12 crc kubenswrapper[4687]: I0312 17:08:12.191863 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxzxt"] Mar 12 17:08:12 crc kubenswrapper[4687]: I0312 17:08:12.714168 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qxzxt" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="registry-server" containerID="cri-o://9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7" gracePeriod=2 Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.265103 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.376698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgk9d\" (UniqueName: \"kubernetes.io/projected/cccc00c5-85a9-4bec-a026-6f69a13b77c2-kube-api-access-wgk9d\") pod \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.376799 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-catalog-content\") pod \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.376979 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-utilities\") pod \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\" (UID: \"cccc00c5-85a9-4bec-a026-6f69a13b77c2\") " Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.377817 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-utilities" (OuterVolumeSpecName: "utilities") pod "cccc00c5-85a9-4bec-a026-6f69a13b77c2" (UID: "cccc00c5-85a9-4bec-a026-6f69a13b77c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.386252 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cccc00c5-85a9-4bec-a026-6f69a13b77c2-kube-api-access-wgk9d" (OuterVolumeSpecName: "kube-api-access-wgk9d") pod "cccc00c5-85a9-4bec-a026-6f69a13b77c2" (UID: "cccc00c5-85a9-4bec-a026-6f69a13b77c2"). InnerVolumeSpecName "kube-api-access-wgk9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.483394 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgk9d\" (UniqueName: \"kubernetes.io/projected/cccc00c5-85a9-4bec-a026-6f69a13b77c2-kube-api-access-wgk9d\") on node \"crc\" DevicePath \"\"" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.483595 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.524495 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cccc00c5-85a9-4bec-a026-6f69a13b77c2" (UID: "cccc00c5-85a9-4bec-a026-6f69a13b77c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.585973 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccc00c5-85a9-4bec-a026-6f69a13b77c2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.726051 4687 generic.go:334] "Generic (PLEG): container finished" podID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerID="9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7" exitCode=0 Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.726098 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxzxt" event={"ID":"cccc00c5-85a9-4bec-a026-6f69a13b77c2","Type":"ContainerDied","Data":"9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7"} Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.726153 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxzxt" event={"ID":"cccc00c5-85a9-4bec-a026-6f69a13b77c2","Type":"ContainerDied","Data":"c31e54252b3456173f415e9caaf4ac9c3e390240853c1e9ef3b32b5668d3a920"} Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.726174 4687 scope.go:117] "RemoveContainer" containerID="9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.726433 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxzxt" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.758202 4687 scope.go:117] "RemoveContainer" containerID="509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.767587 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxzxt"] Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.784278 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qxzxt"] Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.798805 4687 scope.go:117] "RemoveContainer" containerID="5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.843229 4687 scope.go:117] "RemoveContainer" containerID="9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7" Mar 12 17:08:13 crc kubenswrapper[4687]: E0312 17:08:13.844153 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7\": container with ID starting with 9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7 not found: ID does not exist" containerID="9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.844188 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7"} err="failed to get container status \"9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7\": rpc error: code = NotFound desc = could not find container \"9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7\": container with ID starting with 9f629110aa59a3499cb5d2fec0bd009178ba66b4f656e68e0322b36a66870fb7 not found: ID does not exist" Mar 12 17:08:13 crc 
kubenswrapper[4687]: I0312 17:08:13.844254 4687 scope.go:117] "RemoveContainer" containerID="509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079" Mar 12 17:08:13 crc kubenswrapper[4687]: E0312 17:08:13.844766 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079\": container with ID starting with 509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079 not found: ID does not exist" containerID="509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.844820 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079"} err="failed to get container status \"509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079\": rpc error: code = NotFound desc = could not find container \"509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079\": container with ID starting with 509430d3871956a9477d27301c7ec4d0ce64e820dc1527983602c1770e108079 not found: ID does not exist" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.844846 4687 scope.go:117] "RemoveContainer" containerID="5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f" Mar 12 17:08:13 crc kubenswrapper[4687]: E0312 17:08:13.845238 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f\": container with ID starting with 5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f not found: ID does not exist" containerID="5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f" Mar 12 17:08:13 crc kubenswrapper[4687]: I0312 17:08:13.845276 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f"} err="failed to get container status \"5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f\": rpc error: code = NotFound desc = could not find container \"5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f\": container with ID starting with 5aad7993393b805cf9f395b7664aaf794910a411ddb59bfbf1502169203edb4f not found: ID does not exist" Mar 12 17:08:15 crc kubenswrapper[4687]: I0312 17:08:15.763097 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" path="/var/lib/kubelet/pods/cccc00c5-85a9-4bec-a026-6f69a13b77c2/volumes" Mar 12 17:08:25 crc kubenswrapper[4687]: I0312 17:08:25.734191 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:08:26 crc kubenswrapper[4687]: I0312 17:08:26.915158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"f1f8a6a188bd2ca7104da860801ac623dbfc4a90b30702bebae0059f6c759a97"} Mar 12 17:08:29 crc kubenswrapper[4687]: E0312 17:08:29.955787 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.38:36176->38.102.83.38:46595: write tcp 38.102.83.38:36176->38.102.83.38:46595: write: broken pipe Mar 12 17:08:35 crc kubenswrapper[4687]: I0312 17:08:35.878704 4687 
scope.go:117] "RemoveContainer" containerID="b5edb1124b8324e100fdeddef141c60afb93f6900a04220b133c6415943e826c" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.148957 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555590-r4qwg"] Mar 12 17:10:00 crc kubenswrapper[4687]: E0312 17:10:00.149999 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="extract-content" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.150044 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="extract-content" Mar 12 17:10:00 crc kubenswrapper[4687]: E0312 17:10:00.150073 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="registry-server" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.150081 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="registry-server" Mar 12 17:10:00 crc kubenswrapper[4687]: E0312 17:10:00.150092 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f05d42-1887-472f-a116-9034ce051e6d" containerName="oc" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.150101 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f05d42-1887-472f-a116-9034ce051e6d" containerName="oc" Mar 12 17:10:00 crc kubenswrapper[4687]: E0312 17:10:00.150118 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="extract-utilities" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.150126 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="extract-utilities" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.150410 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cccc00c5-85a9-4bec-a026-6f69a13b77c2" containerName="registry-server" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.150444 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f05d42-1887-472f-a116-9034ce051e6d" containerName="oc" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.151415 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555590-r4qwg" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.155235 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.155351 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.155624 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.162501 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555590-r4qwg"] Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.254837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxl9p\" (UniqueName: \"kubernetes.io/projected/b1078644-8717-4e2d-b681-5f3d0aee315d-kube-api-access-qxl9p\") pod \"auto-csr-approver-29555590-r4qwg\" (UID: \"b1078644-8717-4e2d-b681-5f3d0aee315d\") " pod="openshift-infra/auto-csr-approver-29555590-r4qwg" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.357958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxl9p\" (UniqueName: \"kubernetes.io/projected/b1078644-8717-4e2d-b681-5f3d0aee315d-kube-api-access-qxl9p\") pod \"auto-csr-approver-29555590-r4qwg\" (UID: \"b1078644-8717-4e2d-b681-5f3d0aee315d\") " pod="openshift-infra/auto-csr-approver-29555590-r4qwg" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.384040 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxl9p\" (UniqueName: \"kubernetes.io/projected/b1078644-8717-4e2d-b681-5f3d0aee315d-kube-api-access-qxl9p\") pod \"auto-csr-approver-29555590-r4qwg\" (UID: \"b1078644-8717-4e2d-b681-5f3d0aee315d\") " pod="openshift-infra/auto-csr-approver-29555590-r4qwg" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.473809 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555590-r4qwg" Mar 12 17:10:00 crc kubenswrapper[4687]: I0312 17:10:00.954092 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555590-r4qwg"] Mar 12 17:10:01 crc kubenswrapper[4687]: I0312 17:10:01.417617 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555590-r4qwg" event={"ID":"b1078644-8717-4e2d-b681-5f3d0aee315d","Type":"ContainerStarted","Data":"13198f3aa2d5d55f304bacffba095ad8cdc39ab32cff9a157f06f6646bb86669"} Mar 12 17:10:03 crc kubenswrapper[4687]: I0312 17:10:03.447740 4687 generic.go:334] "Generic (PLEG): container finished" podID="b1078644-8717-4e2d-b681-5f3d0aee315d" containerID="ec538f15ab42cbbcd7f5a2f24621678907e0e51fa08b5a3b3eb9a007ba4d3f7e" exitCode=0 Mar 12 17:10:03 crc kubenswrapper[4687]: I0312 17:10:03.447817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555590-r4qwg" event={"ID":"b1078644-8717-4e2d-b681-5f3d0aee315d","Type":"ContainerDied","Data":"ec538f15ab42cbbcd7f5a2f24621678907e0e51fa08b5a3b3eb9a007ba4d3f7e"} Mar 12 17:10:04 crc kubenswrapper[4687]: I0312 17:10:04.942264 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555590-r4qwg" Mar 12 17:10:05 crc kubenswrapper[4687]: I0312 17:10:05.120550 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxl9p\" (UniqueName: \"kubernetes.io/projected/b1078644-8717-4e2d-b681-5f3d0aee315d-kube-api-access-qxl9p\") pod \"b1078644-8717-4e2d-b681-5f3d0aee315d\" (UID: \"b1078644-8717-4e2d-b681-5f3d0aee315d\") " Mar 12 17:10:05 crc kubenswrapper[4687]: I0312 17:10:05.127451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1078644-8717-4e2d-b681-5f3d0aee315d-kube-api-access-qxl9p" (OuterVolumeSpecName: "kube-api-access-qxl9p") pod "b1078644-8717-4e2d-b681-5f3d0aee315d" (UID: "b1078644-8717-4e2d-b681-5f3d0aee315d"). InnerVolumeSpecName "kube-api-access-qxl9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:10:05 crc kubenswrapper[4687]: I0312 17:10:05.223013 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxl9p\" (UniqueName: \"kubernetes.io/projected/b1078644-8717-4e2d-b681-5f3d0aee315d-kube-api-access-qxl9p\") on node \"crc\" DevicePath \"\"" Mar 12 17:10:05 crc kubenswrapper[4687]: I0312 17:10:05.471348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555590-r4qwg" event={"ID":"b1078644-8717-4e2d-b681-5f3d0aee315d","Type":"ContainerDied","Data":"13198f3aa2d5d55f304bacffba095ad8cdc39ab32cff9a157f06f6646bb86669"} Mar 12 17:10:05 crc kubenswrapper[4687]: I0312 17:10:05.471411 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13198f3aa2d5d55f304bacffba095ad8cdc39ab32cff9a157f06f6646bb86669" Mar 12 17:10:05 crc kubenswrapper[4687]: I0312 17:10:05.471602 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555590-r4qwg" Mar 12 17:10:06 crc kubenswrapper[4687]: I0312 17:10:06.025897 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555584-jlwl9"] Mar 12 17:10:06 crc kubenswrapper[4687]: I0312 17:10:06.040175 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555584-jlwl9"] Mar 12 17:10:07 crc kubenswrapper[4687]: I0312 17:10:07.752139 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee" path="/var/lib/kubelet/pods/6c6b9ec0-7da1-4a99-ad4f-5e3a69b1e4ee/volumes" Mar 12 17:10:21 crc kubenswrapper[4687]: I0312 17:10:21.945150 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4zh2j"] Mar 12 17:10:21 crc kubenswrapper[4687]: E0312 17:10:21.946174 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1078644-8717-4e2d-b681-5f3d0aee315d" containerName="oc" Mar 12 17:10:21 crc kubenswrapper[4687]: I0312 17:10:21.946188 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1078644-8717-4e2d-b681-5f3d0aee315d" containerName="oc" Mar 12 17:10:21 crc kubenswrapper[4687]: I0312 17:10:21.946504 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1078644-8717-4e2d-b681-5f3d0aee315d" containerName="oc" Mar 12 17:10:21 crc kubenswrapper[4687]: I0312 17:10:21.949679 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:21 crc kubenswrapper[4687]: I0312 17:10:21.963981 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zh2j"] Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.070716 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9zfx\" (UniqueName: \"kubernetes.io/projected/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-kube-api-access-h9zfx\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.070933 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-catalog-content\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.071007 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-utilities\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.172619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-catalog-content\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.172722 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-utilities\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.172774 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9zfx\" (UniqueName: \"kubernetes.io/projected/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-kube-api-access-h9zfx\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.173195 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-catalog-content\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.173293 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-utilities\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.205415 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h9zfx\" (UniqueName: \"kubernetes.io/projected/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-kube-api-access-h9zfx\") pod \"community-operators-4zh2j\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.270740 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:22 crc kubenswrapper[4687]: I0312 17:10:22.912933 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zh2j"] Mar 12 17:10:23 crc kubenswrapper[4687]: I0312 17:10:23.678250 4687 generic.go:334] "Generic (PLEG): container finished" podID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerID="e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9" exitCode=0 Mar 12 17:10:23 crc kubenswrapper[4687]: I0312 17:10:23.678347 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zh2j" event={"ID":"acc3b0a7-b5e1-47f4-9e7d-a39520e64820","Type":"ContainerDied","Data":"e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9"} Mar 12 17:10:23 crc kubenswrapper[4687]: I0312 17:10:23.678569 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zh2j" event={"ID":"acc3b0a7-b5e1-47f4-9e7d-a39520e64820","Type":"ContainerStarted","Data":"54f84e28e2d904ab622b09c64e22cab3f15539b867d74156310b3519275f81f7"} Mar 12 17:10:24 crc kubenswrapper[4687]: I0312 17:10:24.691701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zh2j" event={"ID":"acc3b0a7-b5e1-47f4-9e7d-a39520e64820","Type":"ContainerStarted","Data":"bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889"} Mar 12 17:10:25 crc kubenswrapper[4687]: E0312 17:10:25.842248 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacc3b0a7_b5e1_47f4_9e7d_a39520e64820.slice/crio-conmon-bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889.scope\": RecentStats: unable to find data in memory cache]" Mar 12 17:10:26 crc kubenswrapper[4687]: I0312 17:10:26.717922 4687 generic.go:334] "Generic (PLEG): container finished" podID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerID="bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889" exitCode=0 Mar 12 17:10:26 crc kubenswrapper[4687]: I0312 17:10:26.718009 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zh2j" event={"ID":"acc3b0a7-b5e1-47f4-9e7d-a39520e64820","Type":"ContainerDied","Data":"bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889"} Mar 12 17:10:27 crc kubenswrapper[4687]: I0312 17:10:27.748162 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zh2j" event={"ID":"acc3b0a7-b5e1-47f4-9e7d-a39520e64820","Type":"ContainerStarted","Data":"7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec"} Mar 12 17:10:27 crc kubenswrapper[4687]: I0312 17:10:27.782085 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4zh2j" podStartSLOduration=3.259241515 podStartE2EDuration="6.782064812s" podCreationTimestamp="2026-03-12 17:10:21 +0000 UTC" 
firstStartedPulling="2026-03-12 17:10:23.681378781 +0000 UTC m=+4072.645341125" lastFinishedPulling="2026-03-12 17:10:27.204202078 +0000 UTC m=+4076.168164422" observedRunningTime="2026-03-12 17:10:27.769244274 +0000 UTC m=+4076.733206618" watchObservedRunningTime="2026-03-12 17:10:27.782064812 +0000 UTC m=+4076.746027156" Mar 12 17:10:32 crc kubenswrapper[4687]: I0312 17:10:32.271174 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:32 crc kubenswrapper[4687]: I0312 17:10:32.272326 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:32 crc kubenswrapper[4687]: I0312 17:10:32.366156 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:32 crc kubenswrapper[4687]: I0312 17:10:32.840297 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:32 crc kubenswrapper[4687]: I0312 17:10:32.895305 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zh2j"] Mar 12 17:10:34 crc kubenswrapper[4687]: I0312 17:10:34.814240 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4zh2j" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="registry-server" containerID="cri-o://7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec" gracePeriod=2 Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.364511 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.516176 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-catalog-content\") pod \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.516275 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9zfx\" (UniqueName: \"kubernetes.io/projected/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-kube-api-access-h9zfx\") pod \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.516357 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-utilities\") pod \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\" (UID: \"acc3b0a7-b5e1-47f4-9e7d-a39520e64820\") " Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.517132 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-utilities" (OuterVolumeSpecName: "utilities") pod "acc3b0a7-b5e1-47f4-9e7d-a39520e64820" (UID: "acc3b0a7-b5e1-47f4-9e7d-a39520e64820"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.522122 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-kube-api-access-h9zfx" (OuterVolumeSpecName: "kube-api-access-h9zfx") pod "acc3b0a7-b5e1-47f4-9e7d-a39520e64820" (UID: "acc3b0a7-b5e1-47f4-9e7d-a39520e64820"). InnerVolumeSpecName "kube-api-access-h9zfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.562549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acc3b0a7-b5e1-47f4-9e7d-a39520e64820" (UID: "acc3b0a7-b5e1-47f4-9e7d-a39520e64820"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.619065 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9zfx\" (UniqueName: \"kubernetes.io/projected/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-kube-api-access-h9zfx\") on node \"crc\" DevicePath \"\"" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.619314 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.619419 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3b0a7-b5e1-47f4-9e7d-a39520e64820-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.830403 4687 generic.go:334] "Generic (PLEG): container finished" podID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerID="7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec" exitCode=0 Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.830478 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zh2j" event={"ID":"acc3b0a7-b5e1-47f4-9e7d-a39520e64820","Type":"ContainerDied","Data":"7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec"} Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.830527 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zh2j" event={"ID":"acc3b0a7-b5e1-47f4-9e7d-a39520e64820","Type":"ContainerDied","Data":"54f84e28e2d904ab622b09c64e22cab3f15539b867d74156310b3519275f81f7"} Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.830562 4687 scope.go:117] "RemoveContainer" containerID="7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.830803 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zh2j" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.861576 4687 scope.go:117] "RemoveContainer" containerID="bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.869593 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zh2j"] Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.887015 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4zh2j"] Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.890569 4687 scope.go:117] "RemoveContainer" containerID="e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.936163 4687 scope.go:117] "RemoveContainer" containerID="7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec" Mar 12 17:10:35 crc kubenswrapper[4687]: E0312 17:10:35.936627 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec\": container with ID starting with 7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec not found: ID does not exist" containerID="7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.936674 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec"} err="failed to get container status \"7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec\": rpc error: code = NotFound desc = could not find container \"7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec\": container with ID starting with 7beaeb8a4abdd050d33a74351ca0c7ba49e78e7e1ace70fc7b373bbc5af626ec not found: ID does not exist" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.936702 4687 scope.go:117] "RemoveContainer" containerID="bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889" Mar 12 17:10:35 crc kubenswrapper[4687]: E0312 17:10:35.937153 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889\": container with ID starting with bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889 not found: ID does not exist" containerID="bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.937188 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889"} err="failed to get container status \"bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889\": rpc error: code = NotFound desc = could not find container \"bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889\": container with ID starting with bc5bfa8b9b9482d39240dd56908bdb9dcc6514b0a619fae2a142ae0f3ea71889 not found: ID does not exist" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.937209 4687 scope.go:117] "RemoveContainer" containerID="e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9" Mar 12 17:10:35 crc kubenswrapper[4687]: E0312 17:10:35.937561 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9\": container with ID starting with e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9 not found: ID does not exist" containerID="e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9" Mar 12 17:10:35 crc kubenswrapper[4687]: I0312 17:10:35.937588 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9"} err="failed to get container status \"e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9\": rpc error: code = NotFound desc = could not find container \"e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9\": container with ID starting with e2a39ac3c4ce0c718039cdcb92a488844a2d2617a6ea6348519ef44e60b0ebd9 not found: ID does not exist" Mar 12 17:10:36 crc kubenswrapper[4687]: I0312 17:10:36.051063 4687 scope.go:117] "RemoveContainer" containerID="a1f3bf20e604a1123c0932fa9a564a05bb79f1d75ff042cb9ae5854151f90f86" Mar 12 17:10:37 crc kubenswrapper[4687]: I0312 17:10:37.750283 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" path="/var/lib/kubelet/pods/acc3b0a7-b5e1-47f4-9e7d-a39520e64820/volumes" Mar 12 17:10:44 crc kubenswrapper[4687]: I0312 17:10:44.121851 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:10:44 crc kubenswrapper[4687]: I0312 17:10:44.122659 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:11:14 crc kubenswrapper[4687]: I0312 17:11:14.122023 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:11:14 crc kubenswrapper[4687]: I0312 17:11:14.122587 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:11:22 crc kubenswrapper[4687]: E0312 17:11:22.181388 4687 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.38:45518->38.102.83.38:46595: read tcp 38.102.83.38:45518->38.102.83.38:46595: read: connection reset by peer Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.121657 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.122141 4687 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.122184 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.123263 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1f8a6a188bd2ca7104da860801ac623dbfc4a90b30702bebae0059f6c759a97"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.123308 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://f1f8a6a188bd2ca7104da860801ac623dbfc4a90b30702bebae0059f6c759a97" gracePeriod=600 Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.614479 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="f1f8a6a188bd2ca7104da860801ac623dbfc4a90b30702bebae0059f6c759a97" exitCode=0 Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.614522 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"f1f8a6a188bd2ca7104da860801ac623dbfc4a90b30702bebae0059f6c759a97"} Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.614811 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b"} Mar 12 17:11:44 crc kubenswrapper[4687]: I0312 17:11:44.614846 4687 scope.go:117] "RemoveContainer" containerID="448fcd483921cbaf78352ca5384fa32ec6296e57fee6295bce5e0ceb93bf5d5b" Mar 12 17:11:49 crc kubenswrapper[4687]: E0312 17:11:49.749486 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.38:49194->38.102.83.38:46595: write tcp 38.102.83.38:49194->38.102.83.38:46595: write: broken pipe Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.169071 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555592-8s27b"] Mar 12 17:12:00 crc kubenswrapper[4687]: E0312 17:12:00.170499 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="extract-content" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.170535 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="extract-content" Mar 12 17:12:00 crc kubenswrapper[4687]: E0312 17:12:00.170560 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="registry-server" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 
17:12:00.170568 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="registry-server" Mar 12 17:12:00 crc kubenswrapper[4687]: E0312 17:12:00.170605 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="extract-utilities" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.170613 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="extract-utilities" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.171043 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc3b0a7-b5e1-47f4-9e7d-a39520e64820" containerName="registry-server" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.172108 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555592-8s27b" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.175612 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.175652 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.175855 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.188490 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555592-8s27b"] Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.281437 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzt8d\" (UniqueName: \"kubernetes.io/projected/48185718-1b64-44ea-9e80-a365e6b32303-kube-api-access-hzt8d\") pod \"auto-csr-approver-29555592-8s27b\" (UID: \"48185718-1b64-44ea-9e80-a365e6b32303\") " pod="openshift-infra/auto-csr-approver-29555592-8s27b" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.384460 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzt8d\" (UniqueName: \"kubernetes.io/projected/48185718-1b64-44ea-9e80-a365e6b32303-kube-api-access-hzt8d\") pod \"auto-csr-approver-29555592-8s27b\" (UID: \"48185718-1b64-44ea-9e80-a365e6b32303\") " pod="openshift-infra/auto-csr-approver-29555592-8s27b" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.414165 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzt8d\" (UniqueName: \"kubernetes.io/projected/48185718-1b64-44ea-9e80-a365e6b32303-kube-api-access-hzt8d\") pod \"auto-csr-approver-29555592-8s27b\" (UID: \"48185718-1b64-44ea-9e80-a365e6b32303\") " pod="openshift-infra/auto-csr-approver-29555592-8s27b" Mar 12 17:12:00 crc kubenswrapper[4687]: I0312 17:12:00.515785 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555592-8s27b" Mar 12 17:12:01 crc kubenswrapper[4687]: I0312 17:12:01.026958 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555592-8s27b"] Mar 12 17:12:01 crc kubenswrapper[4687]: I0312 17:12:01.814419 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555592-8s27b" event={"ID":"48185718-1b64-44ea-9e80-a365e6b32303","Type":"ContainerStarted","Data":"20c93c5eb2675d0365356fd988ab347ee61d1343fc2f91e56068c0266ef6da1d"} Mar 12 17:12:02 crc kubenswrapper[4687]: I0312 17:12:02.862275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555592-8s27b" event={"ID":"48185718-1b64-44ea-9e80-a365e6b32303","Type":"ContainerStarted","Data":"86fd0a76f1850c76f237b92d48d7db7404db1156431e7c7435df67ce9e783a05"} Mar 12 17:12:02 crc kubenswrapper[4687]: I0312 17:12:02.886295 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555592-8s27b" podStartSLOduration=1.679580698 podStartE2EDuration="2.886276116s" podCreationTimestamp="2026-03-12 17:12:00 +0000 UTC" firstStartedPulling="2026-03-12 17:12:01.031125283 +0000 UTC m=+4169.995087627" lastFinishedPulling="2026-03-12 17:12:02.237820701 +0000 UTC m=+4171.201783045" observedRunningTime="2026-03-12 17:12:02.877655823 +0000 UTC m=+4171.841618187" watchObservedRunningTime="2026-03-12 17:12:02.886276116 +0000 UTC m=+4171.850238470" Mar 12 17:12:03 crc kubenswrapper[4687]: I0312 17:12:03.873557 4687 generic.go:334] "Generic (PLEG): container finished" podID="48185718-1b64-44ea-9e80-a365e6b32303" containerID="86fd0a76f1850c76f237b92d48d7db7404db1156431e7c7435df67ce9e783a05" exitCode=0 Mar 12 17:12:03 crc kubenswrapper[4687]: I0312 17:12:03.873624 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555592-8s27b" event={"ID":"48185718-1b64-44ea-9e80-a365e6b32303","Type":"ContainerDied","Data":"86fd0a76f1850c76f237b92d48d7db7404db1156431e7c7435df67ce9e783a05"} Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.766629 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555592-8s27b" Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.855441 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzt8d\" (UniqueName: \"kubernetes.io/projected/48185718-1b64-44ea-9e80-a365e6b32303-kube-api-access-hzt8d\") pod \"48185718-1b64-44ea-9e80-a365e6b32303\" (UID: \"48185718-1b64-44ea-9e80-a365e6b32303\") " Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.863035 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48185718-1b64-44ea-9e80-a365e6b32303-kube-api-access-hzt8d" (OuterVolumeSpecName: "kube-api-access-hzt8d") pod "48185718-1b64-44ea-9e80-a365e6b32303" (UID: "48185718-1b64-44ea-9e80-a365e6b32303"). InnerVolumeSpecName "kube-api-access-hzt8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.907145 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555592-8s27b" event={"ID":"48185718-1b64-44ea-9e80-a365e6b32303","Type":"ContainerDied","Data":"20c93c5eb2675d0365356fd988ab347ee61d1343fc2f91e56068c0266ef6da1d"} Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.907190 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20c93c5eb2675d0365356fd988ab347ee61d1343fc2f91e56068c0266ef6da1d" Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.907201 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555592-8s27b" Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.958388 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzt8d\" (UniqueName: \"kubernetes.io/projected/48185718-1b64-44ea-9e80-a365e6b32303-kube-api-access-hzt8d\") on node \"crc\" DevicePath \"\"" Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.958939 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555586-8dm5j"] Mar 12 17:12:05 crc kubenswrapper[4687]: I0312 17:12:05.971221 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555586-8dm5j"] Mar 12 17:12:07 crc kubenswrapper[4687]: I0312 17:12:07.763834 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5" path="/var/lib/kubelet/pods/f15b36d4-f3bd-4e3f-9ab9-d8688e1689d5/volumes" Mar 12 17:12:36 crc kubenswrapper[4687]: I0312 17:12:36.186587 4687 scope.go:117] "RemoveContainer" containerID="655ff34c4f630be2017f0849cb630c007aa644978f4135e3874c02537237ab64" Mar 12 17:13:44 crc kubenswrapper[4687]: I0312 17:13:44.121543 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:13:44 crc kubenswrapper[4687]: I0312 17:13:44.121948 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.155769 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555594-7pkcf"] Mar 12 17:14:00 crc kubenswrapper[4687]: E0312 17:14:00.156943 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48185718-1b64-44ea-9e80-a365e6b32303" containerName="oc" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.156992 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="48185718-1b64-44ea-9e80-a365e6b32303" containerName="oc" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.157345 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="48185718-1b64-44ea-9e80-a365e6b32303" containerName="oc" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.158424 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555594-7pkcf" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.161667 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.161892 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.162093 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.181343 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555594-7pkcf"] Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.200405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gnn\" (UniqueName: \"kubernetes.io/projected/568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f-kube-api-access-p7gnn\") pod \"auto-csr-approver-29555594-7pkcf\" (UID: \"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f\") " pod="openshift-infra/auto-csr-approver-29555594-7pkcf" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.302443 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gnn\" (UniqueName: \"kubernetes.io/projected/568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f-kube-api-access-p7gnn\") pod \"auto-csr-approver-29555594-7pkcf\" (UID: \"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f\") " pod="openshift-infra/auto-csr-approver-29555594-7pkcf" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.325951 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gnn\" (UniqueName: \"kubernetes.io/projected/568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f-kube-api-access-p7gnn\") pod \"auto-csr-approver-29555594-7pkcf\" (UID: \"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f\") " pod="openshift-infra/auto-csr-approver-29555594-7pkcf" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.475444 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555594-7pkcf" Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.970686 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555594-7pkcf"] Mar 12 17:14:00 crc kubenswrapper[4687]: I0312 17:14:00.973794 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 17:14:01 crc kubenswrapper[4687]: I0312 17:14:01.168492 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555594-7pkcf" event={"ID":"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f","Type":"ContainerStarted","Data":"d17e4f7876d3c4f5d14d563c3080caa605e2bad6275b3671178dea26d72f7008"} Mar 12 17:14:03 crc kubenswrapper[4687]: I0312 17:14:03.189312 4687 generic.go:334] "Generic (PLEG): container finished" podID="568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f" containerID="d83f5a86ad6f7428727a435dd81e20945b76eabad8961f2a983b09ee38248cb9" exitCode=0 Mar 12 17:14:03 crc kubenswrapper[4687]: I0312 17:14:03.189589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555594-7pkcf" event={"ID":"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f","Type":"ContainerDied","Data":"d83f5a86ad6f7428727a435dd81e20945b76eabad8961f2a983b09ee38248cb9"} Mar 12 17:14:05 crc kubenswrapper[4687]: I0312 17:14:05.317557 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555594-7pkcf" Mar 12 17:14:05 crc kubenswrapper[4687]: I0312 17:14:05.357547 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7gnn\" (UniqueName: \"kubernetes.io/projected/568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f-kube-api-access-p7gnn\") pod \"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f\" (UID: \"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f\") " Mar 12 17:14:05 crc kubenswrapper[4687]: I0312 17:14:05.363391 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f-kube-api-access-p7gnn" (OuterVolumeSpecName: "kube-api-access-p7gnn") pod "568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f" (UID: "568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f"). InnerVolumeSpecName "kube-api-access-p7gnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:14:05 crc kubenswrapper[4687]: I0312 17:14:05.461067 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7gnn\" (UniqueName: \"kubernetes.io/projected/568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f-kube-api-access-p7gnn\") on node \"crc\" DevicePath \"\"" Mar 12 17:14:05 crc kubenswrapper[4687]: I0312 17:14:05.878938 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555594-7pkcf" event={"ID":"568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f","Type":"ContainerDied","Data":"d17e4f7876d3c4f5d14d563c3080caa605e2bad6275b3671178dea26d72f7008"} Mar 12 17:14:05 crc kubenswrapper[4687]: I0312 17:14:05.879200 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d17e4f7876d3c4f5d14d563c3080caa605e2bad6275b3671178dea26d72f7008" Mar 12 17:14:05 crc kubenswrapper[4687]: I0312 17:14:05.879065 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555594-7pkcf" Mar 12 17:14:06 crc kubenswrapper[4687]: I0312 17:14:06.400593 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555588-xkfwf"] Mar 12 17:14:06 crc kubenswrapper[4687]: I0312 17:14:06.418424 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555588-xkfwf"] Mar 12 17:14:07 crc kubenswrapper[4687]: I0312 17:14:07.748477 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f05d42-1887-472f-a116-9034ce051e6d" path="/var/lib/kubelet/pods/53f05d42-1887-472f-a116-9034ce051e6d/volumes" Mar 12 17:14:14 crc kubenswrapper[4687]: I0312 17:14:14.121544 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:14:14 crc kubenswrapper[4687]: I0312 17:14:14.122106 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:14:36 crc kubenswrapper[4687]: I0312 17:14:36.325450 4687 scope.go:117] "RemoveContainer" containerID="9972ff27e0a308206917a0495d2811a8f031d70c777129e96ad5b14cdbfeb830" Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.121208 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.121689 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.121728 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.122518 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.122566 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" gracePeriod=600 Mar 12 17:14:44 crc kubenswrapper[4687]: E0312 17:14:44.243793 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.298532 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" exitCode=0 Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.298654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b"} Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.299023 4687 scope.go:117] "RemoveContainer" containerID="f1f8a6a188bd2ca7104da860801ac623dbfc4a90b30702bebae0059f6c759a97" Mar 12 17:14:44 crc kubenswrapper[4687]: I0312 17:14:44.300300 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:14:44 crc kubenswrapper[4687]: E0312 17:14:44.300843 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:14:54 crc kubenswrapper[4687]: I0312 17:14:54.733023 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:14:54 crc kubenswrapper[4687]: E0312 17:14:54.733902 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.153758 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6"] Mar 12 17:15:00 crc kubenswrapper[4687]: E0312 17:15:00.154822 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f" containerName="oc" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.154853 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f" containerName="oc" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.155105 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f" containerName="oc" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.155994 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.159047 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.159662 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.178931 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6"] Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.210163 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/703247cf-b761-415e-8bd5-b2f825300fe4-config-volume\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.210496 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/703247cf-b761-415e-8bd5-b2f825300fe4-secret-volume\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.210576 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf27x\" (UniqueName: \"kubernetes.io/projected/703247cf-b761-415e-8bd5-b2f825300fe4-kube-api-access-sf27x\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.312897 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/703247cf-b761-415e-8bd5-b2f825300fe4-config-volume\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.313930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/703247cf-b761-415e-8bd5-b2f825300fe4-secret-volume\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.314051 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf27x\" (UniqueName: \"kubernetes.io/projected/703247cf-b761-415e-8bd5-b2f825300fe4-kube-api-access-sf27x\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.313966 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/703247cf-b761-415e-8bd5-b2f825300fe4-config-volume\") pod 
\"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.329308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/703247cf-b761-415e-8bd5-b2f825300fe4-secret-volume\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.335981 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf27x\" (UniqueName: \"kubernetes.io/projected/703247cf-b761-415e-8bd5-b2f825300fe4-kube-api-access-sf27x\") pod \"collect-profiles-29555595-v75s6\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.489414 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:00 crc kubenswrapper[4687]: I0312 17:15:00.983950 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6"] Mar 12 17:15:01 crc kubenswrapper[4687]: I0312 17:15:01.493009 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" event={"ID":"703247cf-b761-415e-8bd5-b2f825300fe4","Type":"ContainerStarted","Data":"5cb98369bca89a98281021865e229c1b6e8207ef4dfa1bca0b060906225b6a2c"} Mar 12 17:15:01 crc kubenswrapper[4687]: I0312 17:15:01.493063 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" event={"ID":"703247cf-b761-415e-8bd5-b2f825300fe4","Type":"ContainerStarted","Data":"c808008c6be4ba5e9c97f411a429c483d37f697e89ad7b3df055c326fb2990a5"} Mar 12 17:15:01 crc kubenswrapper[4687]: I0312 17:15:01.520381 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" podStartSLOduration=1.520334917 podStartE2EDuration="1.520334917s" podCreationTimestamp="2026-03-12 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 17:15:01.508912588 +0000 UTC m=+4350.472874952" watchObservedRunningTime="2026-03-12 17:15:01.520334917 +0000 UTC m=+4350.484297271" Mar 12 17:15:02 crc kubenswrapper[4687]: I0312 17:15:02.519029 4687 generic.go:334] "Generic (PLEG): container finished" podID="703247cf-b761-415e-8bd5-b2f825300fe4" containerID="5cb98369bca89a98281021865e229c1b6e8207ef4dfa1bca0b060906225b6a2c" exitCode=0 Mar 12 17:15:02 crc kubenswrapper[4687]: I0312 17:15:02.519092 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" event={"ID":"703247cf-b761-415e-8bd5-b2f825300fe4","Type":"ContainerDied","Data":"5cb98369bca89a98281021865e229c1b6e8207ef4dfa1bca0b060906225b6a2c"} Mar 12 17:15:03 crc kubenswrapper[4687]: I0312 17:15:03.980417 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.020575 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf27x\" (UniqueName: \"kubernetes.io/projected/703247cf-b761-415e-8bd5-b2f825300fe4-kube-api-access-sf27x\") pod \"703247cf-b761-415e-8bd5-b2f825300fe4\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.021121 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/703247cf-b761-415e-8bd5-b2f825300fe4-config-volume\") pod \"703247cf-b761-415e-8bd5-b2f825300fe4\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.021159 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/703247cf-b761-415e-8bd5-b2f825300fe4-secret-volume\") pod \"703247cf-b761-415e-8bd5-b2f825300fe4\" (UID: \"703247cf-b761-415e-8bd5-b2f825300fe4\") " Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.021987 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/703247cf-b761-415e-8bd5-b2f825300fe4-config-volume" (OuterVolumeSpecName: "config-volume") pod "703247cf-b761-415e-8bd5-b2f825300fe4" (UID: "703247cf-b761-415e-8bd5-b2f825300fe4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.028812 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/703247cf-b761-415e-8bd5-b2f825300fe4-kube-api-access-sf27x" (OuterVolumeSpecName: "kube-api-access-sf27x") pod "703247cf-b761-415e-8bd5-b2f825300fe4" (UID: "703247cf-b761-415e-8bd5-b2f825300fe4"). InnerVolumeSpecName "kube-api-access-sf27x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.033479 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703247cf-b761-415e-8bd5-b2f825300fe4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "703247cf-b761-415e-8bd5-b2f825300fe4" (UID: "703247cf-b761-415e-8bd5-b2f825300fe4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.130749 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/703247cf-b761-415e-8bd5-b2f825300fe4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.130802 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/703247cf-b761-415e-8bd5-b2f825300fe4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.130814 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf27x\" (UniqueName: \"kubernetes.io/projected/703247cf-b761-415e-8bd5-b2f825300fe4-kube-api-access-sf27x\") on node \"crc\" DevicePath \"\"" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.549228 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" event={"ID":"703247cf-b761-415e-8bd5-b2f825300fe4","Type":"ContainerDied","Data":"c808008c6be4ba5e9c97f411a429c483d37f697e89ad7b3df055c326fb2990a5"} Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.549267 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c808008c6be4ba5e9c97f411a429c483d37f697e89ad7b3df055c326fb2990a5" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.549343 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555595-v75s6" Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.584982 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm"] Mar 12 17:15:04 crc kubenswrapper[4687]: I0312 17:15:04.598101 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555550-9vgbm"] Mar 12 17:15:05 crc kubenswrapper[4687]: I0312 17:15:05.750760 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ca01e6-2aca-4d80-939e-3680dbf9f60f" path="/var/lib/kubelet/pods/60ca01e6-2aca-4d80-939e-3680dbf9f60f/volumes" Mar 12 17:15:08 crc kubenswrapper[4687]: I0312 17:15:08.733082 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:15:08 crc kubenswrapper[4687]: E0312 17:15:08.733796 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:15:20 crc kubenswrapper[4687]: I0312 17:15:20.733279 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:15:20 crc kubenswrapper[4687]: E0312 17:15:20.734200 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:15:31 crc kubenswrapper[4687]: I0312 17:15:31.750636 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:15:31 crc kubenswrapper[4687]: E0312 17:15:31.752402 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:15:36 crc kubenswrapper[4687]: I0312 17:15:36.433662 4687 scope.go:117] "RemoveContainer" containerID="c09855d7a8b13b9a11777b46341dbb2f58db9f6db7e8516c8acc994c8927a86e" Mar 12 17:15:46 crc kubenswrapper[4687]: I0312 17:15:46.733105 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:15:46 crc kubenswrapper[4687]: E0312 17:15:46.734677 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:15:58 crc kubenswrapper[4687]: I0312 17:15:58.733551 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:15:58 crc kubenswrapper[4687]: E0312 17:15:58.734352 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.145200 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555596-jp8g4"] Mar 12 17:16:00 crc kubenswrapper[4687]: E0312 17:16:00.146026 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703247cf-b761-415e-8bd5-b2f825300fe4" containerName="collect-profiles" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.146037 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="703247cf-b761-415e-8bd5-b2f825300fe4" containerName="collect-profiles" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.146303 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="703247cf-b761-415e-8bd5-b2f825300fe4" containerName="collect-profiles" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.147105 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.149402 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.149494 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.156823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555596-jp8g4"] Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.158467 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.257924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fxr\" (UniqueName: \"kubernetes.io/projected/dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc-kube-api-access-86fxr\") pod \"auto-csr-approver-29555596-jp8g4\" (UID: \"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc\") " pod="openshift-infra/auto-csr-approver-29555596-jp8g4" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.361414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fxr\" (UniqueName: \"kubernetes.io/projected/dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc-kube-api-access-86fxr\") pod \"auto-csr-approver-29555596-jp8g4\" (UID: \"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc\") " pod="openshift-infra/auto-csr-approver-29555596-jp8g4" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.561435 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fxr\" (UniqueName: \"kubernetes.io/projected/dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc-kube-api-access-86fxr\") pod \"auto-csr-approver-29555596-jp8g4\" (UID: \"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc\") " pod="openshift-infra/auto-csr-approver-29555596-jp8g4" Mar 12 17:16:00 crc kubenswrapper[4687]: I0312 17:16:00.767849 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.128580 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nghfw"] Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.131324 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.154714 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nghfw"] Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.181150 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-catalog-content\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.181253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857pg\" (UniqueName: \"kubernetes.io/projected/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-kube-api-access-857pg\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.181657 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-utilities\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.264025 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555596-jp8g4"] Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.283565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-catalog-content\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.283682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-857pg\" (UniqueName: \"kubernetes.io/projected/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-kube-api-access-857pg\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.283910 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-utilities\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.284834 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-utilities\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.286824 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-catalog-content\") pod \"redhat-marketplace-nghfw\" (UID: 
\"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: W0312 17:16:01.292841 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddec0ae2d_a221_4e48_b4ca_d4fc88b1aacc.slice/crio-a75793cab26d08f8cd457543dd98757ded21c502fb60531df2432f209ff43fca WatchSource:0}: Error finding container a75793cab26d08f8cd457543dd98757ded21c502fb60531df2432f209ff43fca: Status 404 returned error can't find the container with id a75793cab26d08f8cd457543dd98757ded21c502fb60531df2432f209ff43fca Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.313923 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-857pg\" (UniqueName: \"kubernetes.io/projected/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-kube-api-access-857pg\") pod \"redhat-marketplace-nghfw\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.461619 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:01 crc kubenswrapper[4687]: I0312 17:16:01.970657 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nghfw"] Mar 12 17:16:02 crc kubenswrapper[4687]: I0312 17:16:02.245351 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" event={"ID":"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc","Type":"ContainerStarted","Data":"a75793cab26d08f8cd457543dd98757ded21c502fb60531df2432f209ff43fca"} Mar 12 17:16:02 crc kubenswrapper[4687]: I0312 17:16:02.247831 4687 generic.go:334] "Generic (PLEG): container finished" podID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerID="08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4" exitCode=0 Mar 12 17:16:02 crc kubenswrapper[4687]: I0312 17:16:02.247892 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nghfw" event={"ID":"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0","Type":"ContainerDied","Data":"08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4"} Mar 12 17:16:02 crc kubenswrapper[4687]: I0312 17:16:02.247913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nghfw" event={"ID":"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0","Type":"ContainerStarted","Data":"b7ccdd24c555c992e2e2f6ded1171de7e296285a4841774bf311cf9d559a754a"} Mar 12 17:16:03 crc kubenswrapper[4687]: I0312 17:16:03.260473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" event={"ID":"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc","Type":"ContainerStarted","Data":"52abfbd1b6128a397043d2d54e80519ab9ad7623aa55ba4154e09791747f568d"} Mar 12 17:16:03 crc kubenswrapper[4687]: I0312 17:16:03.288980 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" podStartSLOduration=1.835413012 podStartE2EDuration="3.288962218s" podCreationTimestamp="2026-03-12 17:16:00 +0000 UTC" firstStartedPulling="2026-03-12 17:16:01.307305483 +0000 UTC m=+4410.271267827" lastFinishedPulling="2026-03-12 17:16:02.760854689 +0000 UTC m=+4411.724817033" observedRunningTime="2026-03-12 17:16:03.277642689 +0000 UTC m=+4412.241605053" watchObservedRunningTime="2026-03-12 
17:16:03.288962218 +0000 UTC m=+4412.252924562" Mar 12 17:16:04 crc kubenswrapper[4687]: I0312 17:16:04.281838 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nghfw" event={"ID":"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0","Type":"ContainerStarted","Data":"000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648"} Mar 12 17:16:04 crc kubenswrapper[4687]: I0312 17:16:04.285666 4687 generic.go:334] "Generic (PLEG): container finished" podID="dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc" containerID="52abfbd1b6128a397043d2d54e80519ab9ad7623aa55ba4154e09791747f568d" exitCode=0 Mar 12 17:16:04 crc kubenswrapper[4687]: I0312 17:16:04.285711 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" event={"ID":"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc","Type":"ContainerDied","Data":"52abfbd1b6128a397043d2d54e80519ab9ad7623aa55ba4154e09791747f568d"} Mar 12 17:16:05 crc kubenswrapper[4687]: I0312 17:16:05.302001 4687 generic.go:334] "Generic (PLEG): container finished" podID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerID="000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648" exitCode=0 Mar 12 17:16:05 crc kubenswrapper[4687]: I0312 17:16:05.302075 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nghfw" event={"ID":"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0","Type":"ContainerDied","Data":"000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648"} Mar 12 17:16:05 crc kubenswrapper[4687]: I0312 17:16:05.876343 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.012173 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86fxr\" (UniqueName: \"kubernetes.io/projected/dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc-kube-api-access-86fxr\") pod \"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc\" (UID: \"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc\") " Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.021976 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc-kube-api-access-86fxr" (OuterVolumeSpecName: "kube-api-access-86fxr") pod "dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc" (UID: "dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc"). InnerVolumeSpecName "kube-api-access-86fxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.116790 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86fxr\" (UniqueName: \"kubernetes.io/projected/dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc-kube-api-access-86fxr\") on node \"crc\" DevicePath \"\"" Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.327237 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nghfw" event={"ID":"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0","Type":"ContainerStarted","Data":"9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b"} Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.332129 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" event={"ID":"dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc","Type":"ContainerDied","Data":"a75793cab26d08f8cd457543dd98757ded21c502fb60531df2432f209ff43fca"} Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.332166 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a75793cab26d08f8cd457543dd98757ded21c502fb60531df2432f209ff43fca" Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.332326 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555596-jp8g4" Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.350288 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555590-r4qwg"] Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.362601 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555590-r4qwg"] Mar 12 17:16:06 crc kubenswrapper[4687]: I0312 17:16:06.363209 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nghfw" podStartSLOduration=1.90666556 podStartE2EDuration="5.363195392s" podCreationTimestamp="2026-03-12 17:16:01 +0000 UTC" firstStartedPulling="2026-03-12 17:16:02.249879229 +0000 UTC m=+4411.213841573" lastFinishedPulling="2026-03-12 17:16:05.706409051 +0000 UTC m=+4414.670371405" observedRunningTime="2026-03-12 17:16:06.350160217 +0000 UTC m=+4415.314122561" watchObservedRunningTime="2026-03-12 17:16:06.363195392 +0000 UTC m=+4415.327157736" Mar 12 17:16:07 crc kubenswrapper[4687]: I0312 17:16:07.745554 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1078644-8717-4e2d-b681-5f3d0aee315d" path="/var/lib/kubelet/pods/b1078644-8717-4e2d-b681-5f3d0aee315d/volumes" Mar 12 17:16:10 crc kubenswrapper[4687]: I0312 17:16:10.733559 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:16:10 crc kubenswrapper[4687]: E0312 17:16:10.734443 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:16:11 crc kubenswrapper[4687]: I0312 17:16:11.462724 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:11 crc kubenswrapper[4687]: I0312 17:16:11.463204 
4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:11 crc kubenswrapper[4687]: I0312 17:16:11.508554 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:12 crc kubenswrapper[4687]: I0312 17:16:12.483708 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:12 crc kubenswrapper[4687]: I0312 17:16:12.543201 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nghfw"] Mar 12 17:16:14 crc kubenswrapper[4687]: I0312 17:16:14.429963 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nghfw" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="registry-server" containerID="cri-o://9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b" gracePeriod=2 Mar 12 17:16:14 crc kubenswrapper[4687]: I0312 17:16:14.970645 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.139256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-utilities\") pod \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.139414 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-857pg\" (UniqueName: \"kubernetes.io/projected/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-kube-api-access-857pg\") pod \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.139532 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-catalog-content\") pod \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\" (UID: \"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0\") " Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.140093 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-utilities" (OuterVolumeSpecName: "utilities") pod "4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" (UID: "4262ef56-ecfe-4348-b5e1-9e456aa7e9e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.140665 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.145530 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-kube-api-access-857pg" (OuterVolumeSpecName: "kube-api-access-857pg") pod "4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" (UID: "4262ef56-ecfe-4348-b5e1-9e456aa7e9e0"). InnerVolumeSpecName "kube-api-access-857pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.166578 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" (UID: "4262ef56-ecfe-4348-b5e1-9e456aa7e9e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.242874 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-857pg\" (UniqueName: \"kubernetes.io/projected/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-kube-api-access-857pg\") on node \"crc\" DevicePath \"\"" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.242953 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.443544 4687 generic.go:334] "Generic (PLEG): container finished" podID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerID="9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b" exitCode=0 Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.443589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nghfw" event={"ID":"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0","Type":"ContainerDied","Data":"9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b"} Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.443625 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nghfw" event={"ID":"4262ef56-ecfe-4348-b5e1-9e456aa7e9e0","Type":"ContainerDied","Data":"b7ccdd24c555c992e2e2f6ded1171de7e296285a4841774bf311cf9d559a754a"} Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.443648 4687 scope.go:117] "RemoveContainer" containerID="9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.444446 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nghfw" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.469400 4687 scope.go:117] "RemoveContainer" containerID="000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.499730 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nghfw"] Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.508628 4687 scope.go:117] "RemoveContainer" containerID="08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.519267 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nghfw"] Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.572934 4687 scope.go:117] "RemoveContainer" containerID="9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b" Mar 12 17:16:15 crc kubenswrapper[4687]: E0312 17:16:15.574030 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b\": container with ID starting with 9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b not found: ID does not exist" containerID="9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.574078 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b"} err="failed to get container status \"9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b\": rpc error: code = NotFound desc = could not find container \"9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b\": container with ID starting with 9a291eb0d29eb1cb236145ed39c4652fee64ded169566024682bbda8edfed28b not found: ID does not exist" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.574115 4687 scope.go:117] "RemoveContainer" containerID="000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648" Mar 12 17:16:15 crc kubenswrapper[4687]: E0312 17:16:15.574461 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648\": container with ID starting with 000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648 not found: ID does not exist" containerID="000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.574489 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648"} err="failed to get container status \"000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648\": rpc error: code = NotFound desc = could not find container \"000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648\": container with ID starting with 000c041c7a94dc0adb3b318913daf10a14edcb2c8884a1ad25c107709dea4648 not found: ID does not exist" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.574507 4687 scope.go:117] "RemoveContainer" containerID="08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4" Mar 12 17:16:15 crc kubenswrapper[4687]: E0312 17:16:15.574961 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4\": container with ID starting with 08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4 not found: ID does not exist" containerID="08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.574992 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4"} err="failed to get container status \"08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4\": rpc error: code = NotFound desc = could not find container \"08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4\": container with ID starting with 08ba1ea3279976a6a8bc33cff68c25173d35d7dce292a423ed99e84e6d25a3a4 not found: ID does not exist" Mar 12 17:16:15 crc kubenswrapper[4687]: I0312 17:16:15.747897 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" path="/var/lib/kubelet/pods/4262ef56-ecfe-4348-b5e1-9e456aa7e9e0/volumes" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.514670 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 17:16:18 crc kubenswrapper[4687]: E0312 17:16:18.515820 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="extract-utilities" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.515840 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="extract-utilities" Mar 12 17:16:18 crc kubenswrapper[4687]: E0312 17:16:18.515891 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc" containerName="oc" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.515903 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc" containerName="oc" Mar 12 17:16:18 crc kubenswrapper[4687]: E0312 17:16:18.515923 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="extract-content" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.515931 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="extract-content" Mar 12 17:16:18 crc kubenswrapper[4687]: E0312 17:16:18.515952 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="registry-server" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.515959 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="registry-server" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.516213 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4262ef56-ecfe-4348-b5e1-9e456aa7e9e0" containerName="registry-server" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.516233 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc" containerName="oc" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.517207 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.520937 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.521193 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.521203 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.521904 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bvmm5" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.537975 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636609 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636643 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-config-data\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsw2p\" (UniqueName: 
\"kubernetes.io/projected/401d4f9b-896e-4926-91ef-c90b5c38ef83-kube-api-access-qsw2p\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.636939 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.637109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.739557 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.739717 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.739931 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.740053 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-config-data\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.740115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsw2p\" (UniqueName: \"kubernetes.io/projected/401d4f9b-896e-4926-91ef-c90b5c38ef83-kube-api-access-qsw2p\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.740308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.740338 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.740466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.740591 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.740667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.741065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.741288 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.741345 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.741388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-config-data\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.746401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.747761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " 
pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.749434 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.771682 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsw2p\" (UniqueName: \"kubernetes.io/projected/401d4f9b-896e-4926-91ef-c90b5c38ef83-kube-api-access-qsw2p\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.787952 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " pod="openstack/tempest-tests-tempest" Mar 12 17:16:18 crc kubenswrapper[4687]: I0312 17:16:18.835691 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 17:16:19 crc kubenswrapper[4687]: I0312 17:16:19.440441 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 17:16:19 crc kubenswrapper[4687]: I0312 17:16:19.493169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"401d4f9b-896e-4926-91ef-c90b5c38ef83","Type":"ContainerStarted","Data":"3049f2235a9bb52b55635db7eea09abdf0c0a4288a07ee7e0417996aba7dc70c"} Mar 12 17:16:25 crc kubenswrapper[4687]: I0312 17:16:25.734938 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:16:25 crc kubenswrapper[4687]: E0312 17:16:25.736594 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:16:36 crc kubenswrapper[4687]: I0312 17:16:36.532468 4687 scope.go:117] "RemoveContainer" containerID="ec538f15ab42cbbcd7f5a2f24621678907e0e51fa08b5a3b3eb9a007ba4d3f7e" Mar 12 17:16:38 crc kubenswrapper[4687]: I0312 17:16:38.735298 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:16:38 crc kubenswrapper[4687]: E0312 17:16:38.737241 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:16:53 crc kubenswrapper[4687]: I0312 17:16:53.732836 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:16:53 crc kubenswrapper[4687]: E0312 17:16:53.734016 
4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:16:57 crc kubenswrapper[4687]: E0312 17:16:57.089165 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 12 17:16:57 crc kubenswrapper[4687]: E0312 17:16:57.092588 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsw2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMa
pEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(401d4f9b-896e-4926-91ef-c90b5c38ef83): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 17:16:57 crc kubenswrapper[4687]: E0312 17:16:57.093705 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="401d4f9b-896e-4926-91ef-c90b5c38ef83" Mar 12 17:16:57 crc kubenswrapper[4687]: E0312 17:16:57.318612 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="401d4f9b-896e-4926-91ef-c90b5c38ef83" Mar 12 17:17:06 crc kubenswrapper[4687]: I0312 17:17:06.734275 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:17:06 crc kubenswrapper[4687]: E0312 17:17:06.735217 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:17:11 crc kubenswrapper[4687]: I0312 17:17:11.187908 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 17:17:13 crc kubenswrapper[4687]: I0312 17:17:13.485046 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"401d4f9b-896e-4926-91ef-c90b5c38ef83","Type":"ContainerStarted","Data":"5919815d56a401871d9c84447b563fbdaffc95d5c02dc6288d0653eca4294e21"} Mar 12 17:17:13 crc kubenswrapper[4687]: I0312 17:17:13.512214 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.772838826 podStartE2EDuration="56.512191247s" podCreationTimestamp="2026-03-12 17:16:17 +0000 UTC" firstStartedPulling="2026-03-12 17:16:19.446128721 +0000 UTC m=+4428.410091065" lastFinishedPulling="2026-03-12 17:17:11.185481142 +0000 UTC m=+4480.149443486" observedRunningTime="2026-03-12 17:17:13.500317433 +0000 UTC m=+4482.464279777" watchObservedRunningTime="2026-03-12 17:17:13.512191247 +0000 UTC m=+4482.476153591" Mar 12 17:17:20 crc kubenswrapper[4687]: I0312 17:17:20.734901 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:17:20 crc kubenswrapper[4687]: E0312 17:17:20.735761 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:17:35 crc kubenswrapper[4687]: I0312 17:17:35.737377 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:17:35 crc kubenswrapper[4687]: E0312 17:17:35.737990 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:17:50 crc kubenswrapper[4687]: I0312 17:17:50.732775 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:17:50 crc kubenswrapper[4687]: E0312 17:17:50.733572 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:17:50 crc kubenswrapper[4687]: I0312 17:17:50.841499 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6wwd"] Mar 12 17:17:50 crc kubenswrapper[4687]: I0312 17:17:50.845923 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:50 crc kubenswrapper[4687]: I0312 17:17:50.885280 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6wwd"] Mar 12 17:17:50 crc kubenswrapper[4687]: I0312 17:17:50.976783 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-catalog-content\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:50 crc kubenswrapper[4687]: I0312 17:17:50.976858 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5k87\" (UniqueName: \"kubernetes.io/projected/08357120-f2bb-4324-ade8-a019ed106514-kube-api-access-m5k87\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:50 crc kubenswrapper[4687]: I0312 17:17:50.977317 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-utilities\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:51 crc kubenswrapper[4687]: I0312 17:17:51.080064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-catalog-content\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:51 crc kubenswrapper[4687]: I0312 17:17:51.080128 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5k87\" (UniqueName: \"kubernetes.io/projected/08357120-f2bb-4324-ade8-a019ed106514-kube-api-access-m5k87\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:51 crc kubenswrapper[4687]: I0312 17:17:51.080248 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-utilities\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:51 crc kubenswrapper[4687]: I0312 17:17:51.080824 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-catalog-content\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:51 crc kubenswrapper[4687]: I0312 17:17:51.080914 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-utilities\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:51 crc kubenswrapper[4687]: I0312 17:17:51.171306 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m5k87\" (UniqueName: \"kubernetes.io/projected/08357120-f2bb-4324-ade8-a019ed106514-kube-api-access-m5k87\") pod \"certified-operators-w6wwd\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:51 crc kubenswrapper[4687]: I0312 17:17:51.179660 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:17:52 crc kubenswrapper[4687]: I0312 17:17:52.781969 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6wwd"] Mar 12 17:17:54 crc kubenswrapper[4687]: I0312 17:17:54.306472 4687 generic.go:334] "Generic (PLEG): container finished" podID="08357120-f2bb-4324-ade8-a019ed106514" containerID="35f2641904d8d4bae7e6ff91445ee8f048139a0277f3b6bc7e6407490a8908a3" exitCode=0 Mar 12 17:17:54 crc kubenswrapper[4687]: I0312 17:17:54.306556 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerDied","Data":"35f2641904d8d4bae7e6ff91445ee8f048139a0277f3b6bc7e6407490a8908a3"} Mar 12 17:17:54 crc kubenswrapper[4687]: I0312 17:17:54.306940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerStarted","Data":"589c2da66497ddd830247b0bf75cb14ddeada09afc806c3898d1e28f173db42c"} Mar 12 17:17:56 crc kubenswrapper[4687]: I0312 17:17:56.336817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerStarted","Data":"2549d2c5bb0075907d7a6e9c762e09c1e971cf34a2319dacdf8b757ec248926f"} Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.395642 4687 generic.go:334] "Generic (PLEG): container finished" podID="08357120-f2bb-4324-ade8-a019ed106514" containerID="2549d2c5bb0075907d7a6e9c762e09c1e971cf34a2319dacdf8b757ec248926f" exitCode=0 Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.395785 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerDied","Data":"2549d2c5bb0075907d7a6e9c762e09c1e971cf34a2319dacdf8b757ec248926f"} Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.398875 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555598-w5bf7"] Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.403411 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.429776 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.429781 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.429785 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.433923 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555598-w5bf7"] Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.521225 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdjt\" (UniqueName: \"kubernetes.io/projected/51edd34d-2c01-4595-ae73-242d69a16c19-kube-api-access-rhdjt\") pod \"auto-csr-approver-29555598-w5bf7\" (UID: \"51edd34d-2c01-4595-ae73-242d69a16c19\") " pod="openshift-infra/auto-csr-approver-29555598-w5bf7" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.623486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhdjt\" (UniqueName: \"kubernetes.io/projected/51edd34d-2c01-4595-ae73-242d69a16c19-kube-api-access-rhdjt\") pod \"auto-csr-approver-29555598-w5bf7\" (UID: \"51edd34d-2c01-4595-ae73-242d69a16c19\") " pod="openshift-infra/auto-csr-approver-29555598-w5bf7" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.664271 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhdjt\" (UniqueName: \"kubernetes.io/projected/51edd34d-2c01-4595-ae73-242d69a16c19-kube-api-access-rhdjt\") pod \"auto-csr-approver-29555598-w5bf7\" (UID: \"51edd34d-2c01-4595-ae73-242d69a16c19\") " pod="openshift-infra/auto-csr-approver-29555598-w5bf7" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.760696 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.863565 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9g44w"] Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.867892 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:00 crc kubenswrapper[4687]: I0312 17:18:00.889174 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9g44w"] Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.034054 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-utilities\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.034119 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7st5\" (UniqueName: \"kubernetes.io/projected/171a712a-f2d7-4b3b-8d7a-c413b045e54c-kube-api-access-x7st5\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.034266 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-catalog-content\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.136077 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-catalog-content\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.136224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-utilities\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.136269 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7st5\" (UniqueName: \"kubernetes.io/projected/171a712a-f2d7-4b3b-8d7a-c413b045e54c-kube-api-access-x7st5\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.139308 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-catalog-content\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.139341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-utilities\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.168477 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x7st5\" (UniqueName: \"kubernetes.io/projected/171a712a-f2d7-4b3b-8d7a-c413b045e54c-kube-api-access-x7st5\") pod \"redhat-operators-9g44w\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:01 crc kubenswrapper[4687]: I0312 17:18:01.276155 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:18:02 crc kubenswrapper[4687]: I0312 17:18:02.001036 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555598-w5bf7"] Mar 12 17:18:02 crc kubenswrapper[4687]: W0312 17:18:02.030580 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51edd34d_2c01_4595_ae73_242d69a16c19.slice/crio-01f547bac8899084afcba268df67da963c84b982346081a133b4d5842874fab8 WatchSource:0}: Error finding container 01f547bac8899084afcba268df67da963c84b982346081a133b4d5842874fab8: Status 404 returned error can't find the container with id 01f547bac8899084afcba268df67da963c84b982346081a133b4d5842874fab8 Mar 12 17:18:02 crc kubenswrapper[4687]: I0312 17:18:02.141116 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9g44w"] Mar 12 17:18:02 crc kubenswrapper[4687]: W0312 17:18:02.151475 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod171a712a_f2d7_4b3b_8d7a_c413b045e54c.slice/crio-e950631412c712d215aefadf7b17151271474848d56ce3df382352b0ab23fc30 WatchSource:0}: Error finding container e950631412c712d215aefadf7b17151271474848d56ce3df382352b0ab23fc30: Status 404 returned error can't find the container with id e950631412c712d215aefadf7b17151271474848d56ce3df382352b0ab23fc30 Mar 12 17:18:02 crc kubenswrapper[4687]: I0312 17:18:02.427528 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerStarted","Data":"d657b183fae01432ff86dd2f57d88fb3f40d952a7f853039c531bbe8a5d0cc6b"} Mar 12 17:18:02 crc kubenswrapper[4687]: I0312 17:18:02.429293 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" event={"ID":"51edd34d-2c01-4595-ae73-242d69a16c19","Type":"ContainerStarted","Data":"01f547bac8899084afcba268df67da963c84b982346081a133b4d5842874fab8"} Mar 12 17:18:02 crc kubenswrapper[4687]: I0312 17:18:02.430793 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerStarted","Data":"e950631412c712d215aefadf7b17151271474848d56ce3df382352b0ab23fc30"} Mar 12 17:18:02 crc kubenswrapper[4687]: I0312 17:18:02.470155 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6wwd" podStartSLOduration=5.883902418 podStartE2EDuration="12.468458513s" podCreationTimestamp="2026-03-12 17:17:50 +0000 UTC" firstStartedPulling="2026-03-12 17:17:54.30941121 +0000 UTC m=+4523.273373564" lastFinishedPulling="2026-03-12 17:18:00.893967315 +0000 UTC m=+4529.857929659" observedRunningTime="2026-03-12 17:18:02.459060206 +0000 UTC m=+4531.423022550" watchObservedRunningTime="2026-03-12 17:18:02.468458513 +0000 UTC m=+4531.432420857" Mar 12 17:18:03 crc kubenswrapper[4687]: I0312 17:18:03.450112 4687 
generic.go:334] "Generic (PLEG): container finished" podID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerID="f9dbbdf30bfd22423640d24dd9dedf5b0c9b20eb4d228f2dc22120ef9cb013be" exitCode=0 Mar 12 17:18:03 crc kubenswrapper[4687]: I0312 17:18:03.450380 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerDied","Data":"f9dbbdf30bfd22423640d24dd9dedf5b0c9b20eb4d228f2dc22120ef9cb013be"} Mar 12 17:18:04 crc kubenswrapper[4687]: I0312 17:18:04.815416 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:18:04 crc kubenswrapper[4687]: E0312 17:18:04.819022 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:18:05 crc kubenswrapper[4687]: I0312 17:18:05.176582 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:05 crc kubenswrapper[4687]: I0312 17:18:05.176598 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:05 crc kubenswrapper[4687]: I0312 17:18:05.830048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" event={"ID":"51edd34d-2c01-4595-ae73-242d69a16c19","Type":"ContainerStarted","Data":"eb7d23cfea9f33dd11a304601ad37afb86021dedc3f59324bc28165388898bcb"} Mar 12 17:18:05 crc kubenswrapper[4687]: I0312 17:18:05.834817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerStarted","Data":"9f338c064484e7fe00f8ae2266ee1d3ba072289aae7c6a57467f1cbe9882afca"} Mar 12 17:18:05 crc kubenswrapper[4687]: I0312 17:18:05.874552 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" podStartSLOduration=4.652223866 podStartE2EDuration="5.874530238s" podCreationTimestamp="2026-03-12 17:18:00 +0000 UTC" firstStartedPulling="2026-03-12 17:18:02.037862006 +0000 UTC m=+4531.001824350" lastFinishedPulling="2026-03-12 17:18:03.260168378 +0000 UTC m=+4532.224130722" observedRunningTime="2026-03-12 17:18:05.872746168 +0000 UTC m=+4534.836708512" watchObservedRunningTime="2026-03-12 17:18:05.874530238 +0000 UTC m=+4534.838492582" Mar 12 17:18:09 crc kubenswrapper[4687]: I0312 17:18:09.882981 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" 
event={"ID":"51edd34d-2c01-4595-ae73-242d69a16c19","Type":"ContainerDied","Data":"eb7d23cfea9f33dd11a304601ad37afb86021dedc3f59324bc28165388898bcb"} Mar 12 17:18:09 crc kubenswrapper[4687]: I0312 17:18:09.884452 4687 generic.go:334] "Generic (PLEG): container finished" podID="51edd34d-2c01-4595-ae73-242d69a16c19" containerID="eb7d23cfea9f33dd11a304601ad37afb86021dedc3f59324bc28165388898bcb" exitCode=0 Mar 12 17:18:11 crc kubenswrapper[4687]: I0312 17:18:11.182963 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:18:11 crc kubenswrapper[4687]: I0312 17:18:11.183269 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.121600 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.121595 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.124624 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.124684 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.224844 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podUID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.225350 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podUID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.441659 4687 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2dm8z container/package-server-manager namespace/openshift-operator-lifecycle-manager: 
Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.441725 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.441747 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.441781 4687 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2dm8z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.441804 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.441858 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.442160 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.442272 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.553772 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" podUID="0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.640603 4687 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-5hwpl container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.640679 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" podUID="5046d5fc-693f-47bc-bae2-c3430c7e6b24" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.696841 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" podUID="f46caff8-15ce-49be-97d0-08e60d937972" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.37:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.832623 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podUID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:12 crc kubenswrapper[4687]: I0312 17:18:12.832869 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podUID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.063763 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-8k59p container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.063862 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" podUID="33969389-2dd2-4c4b-ae70-d6e71f0fdf14" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.154993 4687 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-6zsz9 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.155063 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" podUID="bc2003f0-4f8e-4e59-8a1a-dd7be452b232" containerName="loki-querier" probeResult="failure" output="Get 
\"https://10.217.1.72:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.260850 4687 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-bxc6s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.260956 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" podUID="ac17b136-46f1-4129-a22f-bcd3baaf7813" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.554583 4687 patch_prober.go:28] interesting pod/loki-operator-controller-manager-56bfd9f789-6lvcv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.554654 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" podUID="4568a6b6-c008-4ea7-abec-b824324732d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.670196 4687 trace.go:236] Trace[145551189]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-1" (12-Mar-2026 17:18:11.136) (total time: 2521ms): Mar 12 17:18:13 crc kubenswrapper[4687]: Trace[145551189]: [2.521107153s] [2.521107153s] END Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.670203 4687 trace.go:236] Trace[1750851335]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (12-Mar-2026 17:18:11.620) (total time: 2037ms): Mar 12 17:18:13 crc kubenswrapper[4687]: Trace[1750851335]: [2.0371756s] [2.0371756s] END Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.680643 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.680777 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.680804 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: 
I0312 17:18:13.764292 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.764990 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.846603 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.846658 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.846739 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:13 crc kubenswrapper[4687]: I0312 17:18:13.846789 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.070288 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:14 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:14 crc kubenswrapper[4687]: > Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.231747 4687 patch_prober.go:28] interesting pod/metrics-server-5955fd9895-8btf6 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.231808 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" podUID="66bdc25f-19d1-4b63-83e6-ad246f6722e8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 
17:18:14.232060 4687 patch_prober.go:28] interesting pod/metrics-server-5955fd9895-8btf6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.232079 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" podUID="66bdc25f-19d1-4b63-83e6-ad246f6722e8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.424792 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.595618 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.595659 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.595708 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.595656 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.596329 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.637587 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" podUID="718e95dd-fb86-4403-8048-d68f1f23d3ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.711538 4687 patch_prober.go:28] interesting pod/monitoring-plugin-dd8c9f9fd-2l56l container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.711592 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" podUID="7e3a243a-64b3-42a8-aa54-63ac6ba629c2" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.711549 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.875548 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" podUID="39ab069c-1ccd-4ad4-b4ea-b71b1b09472f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.959553 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" podUID="15c585dd-9efa-430b-aeb5-42eaeace0d18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.959582 4687 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.959548 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podUID="1c9c8552-26b0-408f-bd09-40c74041cbfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.959644 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.981321 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:14 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:14 crc kubenswrapper[4687]: > Mar 12 17:18:14 crc kubenswrapper[4687]: I0312 17:18:14.985648 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:14 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:14 crc kubenswrapper[4687]: > Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.043543 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.134581 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.365639 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.365747 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" podUID="65afd209-a452-442f-853d-d2e062fa2530" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.407544 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podUID="066b8087-d58d-4c75-a4bb-4a4b26710855" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.449555 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podUID="616298f1-0baf-428d-9bb9-3a87f52085e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.534775 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.575536 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.575535 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.806484 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.806538 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.806594 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.806544 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.969955 4687 patch_prober.go:28] interesting pod/console-6c7b658b6f-wgtnl container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:15 crc kubenswrapper[4687]: I0312 17:18:15.970287 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6c7b658b6f-wgtnl" podUID="19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.232016 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get 
\"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.232087 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.233116 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.233172 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.506631 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" podUID="43d0733d-5a4f-4b51-a95e-eb2cf8593545" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.506738 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" podUID="43d0733d-5a4f-4b51-a95e-eb2cf8593545" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.575554 4687 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.575631 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.919584 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" podUID="dc5ebdf2-e54a-4c66-abb7-35039f9226dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Mar 12 17:18:16 crc kubenswrapper[4687]: I0312 17:18:16.919613 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" podUID="dc5ebdf2-e54a-4c66-abb7-35039f9226dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.238933 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.239289 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.239207 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.239386 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.465604 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" podUID="9af65423-8d26-4ff5-97ee-711dc0c4501b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.465725 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" podUID="9af65423-8d26-4ff5-97ee-711dc0c4501b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.761020 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:17 crc kubenswrapper[4687]: I0312 17:18:17.761454 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.278930 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8083/ready\": context deadline exceeded" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.278953 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8081/ready\": context deadline exceeded" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.279841 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.75:8081/ready\": context deadline exceeded" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.279729 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.75:8083/ready\": context deadline exceeded" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.573674 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.573683 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.573827 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.573753 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.677862 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.677905 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.1.74:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.677919 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.677955 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.74:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.741039 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.742526 4687 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-z25z9 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.742572 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" podUID="98e16c84-ec9c-482a-8962-ce13556ffd74" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:18 crc kubenswrapper[4687]: E0312 17:18:18.745671 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.765581 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.797392 4687 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-jp88k container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:18 crc kubenswrapper[4687]: I0312 17:18:18.797490 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" podUID="97a0494c-2509-4e76-afd9-fd2be9482d5d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:20 crc 
kubenswrapper[4687]: I0312 17:18:20.397883 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.398413 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.397941 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.398550 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.479248 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.479273 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.479309 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.479321 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.761776 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.762532 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.809455 4687 patch_prober.go:28] interesting pod/thanos-querier-5d45cd5b67-f9k6z container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.89:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:20 crc kubenswrapper[4687]: I0312 17:18:20.809496 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" podUID="d33075ef-9184-4b45-9272-360a19902c6e" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.89:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.212034 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.212093 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.212515 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.212556 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.264334 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.264405 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.264660 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.264690 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.422542 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.422595 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.422643 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.422655 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.840287 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": context deadline exceeded" Mar 12 17:18:21 crc kubenswrapper[4687]: I0312 17:18:21.840341 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.048378 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nz548 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.048450 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" podUID="54b72843-9c3b-48ea-b74c-5a8b0872e66d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.048657 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nz548 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.048754 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" podUID="54b72843-9c3b-48ea-b74c-5a8b0872e66d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.118523 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.118545 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.118588 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.118597 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.155028 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podUID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440585 4687 patch_prober.go:28] interesting 
pod/package-server-manager-789f6589d5-2dm8z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440671 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440712 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440757 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440760 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440787 4687 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2dm8z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440816 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.440772 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.555566 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" podUID="0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.600156 4687 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-5hwpl container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.600255 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" podUID="5046d5fc-693f-47bc-bae2-c3430c7e6b24" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.695583 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" podUID="f46caff8-15ce-49be-97d0-08e60d937972" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.37:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.779594 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.784470 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.784615 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.784658 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.876459 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:22 crc kubenswrapper[4687]: > Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.891545 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podUID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:22 crc kubenswrapper[4687]: I0312 17:18:22.891547 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podUID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.063349 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-8k59p container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.063490 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" podUID="33969389-2dd2-4c4b-ae70-d6e71f0fdf14" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.154286 4687 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-6zsz9 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.154391 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" podUID="bc2003f0-4f8e-4e59-8a1a-dd7be452b232" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.261441 4687 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-bxc6s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.261736 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" podUID="ac17b136-46f1-4129-a22f-bcd3baaf7813" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.303069 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.303118 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.303149 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.75:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.303196 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.369526 4687 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fdbl6 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.369593 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" podUID="9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.369928 4687 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fdbl6 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.369958 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" podUID="9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.630216 4687 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-52fkb container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.630278 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" podUID="6a654f3a-4b2a-408c-87c1-908616a79eb3" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.646491 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc 
kubenswrapper[4687]: I0312 17:18:23.646497 4687 patch_prober.go:28] interesting pod/loki-operator-controller-manager-56bfd9f789-6lvcv container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.1.70:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.646540 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.646550 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" podUID="4568a6b6-c008-4ea7-abec-b824324732d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.1.70:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.678431 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.678491 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.770487 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.770826 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.770888 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.771494 4687 patch_prober.go:28] interesting pod/loki-operator-controller-manager-56bfd9f789-6lvcv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 
17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.771515 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" podUID="4568a6b6-c008-4ea7-abec-b824324732d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.775613 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.856583 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.856987 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.857247 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.939547 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.939599 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.939865 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.940410 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:23 crc kubenswrapper[4687]: I0312 17:18:23.940433 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.063407 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-8k59p container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.71:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.063492 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" podUID="33969389-2dd2-4c4b-ae70-d6e71f0fdf14" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.1.71:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.184517 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="550e3dcc-162b-4c82-8a8f-81e03e689772" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.184609 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="550e3dcc-162b-4c82-8a8f-81e03e689772" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.218506 4687 patch_prober.go:28] interesting pod/metrics-server-5955fd9895-8btf6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.218540 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" podUID="89dd944c-557b-4060-914f-c5287ed954bb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.218562 4687 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-6zsz9 container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.72:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.218625 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" podUID="bc2003f0-4f8e-4e59-8a1a-dd7be452b232" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.1.72:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.218667 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" podUID="66bdc25f-19d1-4b63-83e6-ad246f6722e8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.218752 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" podUID="89dd944c-557b-4060-914f-c5287ed954bb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.260643 4687 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-bxc6s container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.260769 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" podUID="ac17b136-46f1-4129-a22f-bcd3baaf7813" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.464634 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.464666 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.546631 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" podUID="af6289c5-2a9a-4429-96d6-3c7bbff706e0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.628587 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" podUID="63f537cc-6a26-4a05-9b17-80549297e9f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.677824 4687 
patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.74:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.677892 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.74:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.710606 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" podUID="af6289c5-2a9a-4429-96d6-3c7bbff706e0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.710683 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.793276 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-pjvcq" podUID="40b2acd2-7fab-41ca-9ba5-7f8a5dc50606" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.793590 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" podUID="718e95dd-fb86-4403-8048-d68f1f23d3ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.793628 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" podUID="63f537cc-6a26-4a05-9b17-80549297e9f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.793817 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.876587 4687 patch_prober.go:28] interesting pod/monitoring-plugin-dd8c9f9fd-2l56l container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.876669 4687 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" podUID="7e3a243a-64b3-42a8-aa54-63ac6ba629c2" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:24 crc kubenswrapper[4687]: I0312 17:18:24.876675 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.042555 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" podUID="0fc40919-64b1-4b8c-ab92-b9297cb5c352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.042623 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" podUID="718e95dd-fb86-4403-8048-d68f1f23d3ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.042739 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.78:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.042789 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.1.78:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.043037 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.124592 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" podUID="39ab069c-1ccd-4ad4-b4ea-b71b1b09472f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.289565 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" podUID="79d0c51f-999a-4e39-b6b5-aecf10472a4c" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.289632 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" podUID="15c585dd-9efa-430b-aeb5-42eaeace0d18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.289644 4687 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.289969 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.289710 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" podUID="79d0c51f-999a-4e39-b6b5-aecf10472a4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.289990 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" podUID="0fc40919-64b1-4b8c-ab92-b9297cb5c352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.371716 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.454687 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" podUID="39ab069c-1ccd-4ad4-b4ea-b71b1b09472f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.454714 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" podUID="1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.536705 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.537031 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podUID="1c9c8552-26b0-408f-bd09-40c74041cbfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.537112 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podUID="1c9c8552-26b0-408f-bd09-40c74041cbfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.537284 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" podUID="15c585dd-9efa-430b-aeb5-42eaeace0d18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.702578 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" podUID="65afd209-a452-442f-853d-d2e062fa2530" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.702577 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.784612 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podUID="066b8087-d58d-4c75-a4bb-4a4b26710855" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.784753 4687 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.79:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.785031 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="f017a70e-cb13-441a-a70c-0809569c1c52" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.1.79:3101/ready\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.867559 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podUID="616298f1-0baf-428d-9bb9-3a87f52085e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.867598 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" podUID="1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.867672 4687 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.80:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.867702 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="f56100a9-2dec-4a46-a619-6922b78f7e16" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.1.80:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.908550 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.949513 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.949691 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.78:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:25 crc kubenswrapper[4687]: I0312 17:18:25.949717 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.1.78:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.032831 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033029 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033076 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033264 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" podUID="65afd209-a452-442f-853d-d2e062fa2530" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033278 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podUID="066b8087-d58d-4c75-a4bb-4a4b26710855" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033286 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033317 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033420 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033444 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033472 4687 patch_prober.go:28] interesting 
pod/thanos-querier-5d45cd5b67-f9k6z container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.89:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033516 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" podUID="d33075ef-9184-4b45-9272-360a19902c6e" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.89:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033557 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podUID="616298f1-0baf-428d-9bb9-3a87f52085e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033699 4687 patch_prober.go:28] interesting pod/console-6c7b658b6f-wgtnl container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033718 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6c7b658b6f-wgtnl" podUID="19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033779 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.033812 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.034389 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.231450 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.231703 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.231795 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.231813 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.365264 4687 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.79:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.365328 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="f017a70e-cb13-441a-a70c-0809569c1c52" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.1.79:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.428129 4687 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.80:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.428200 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="f56100a9-2dec-4a46-a619-6922b78f7e16" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.1.80:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.463600 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" podUID="43d0733d-5a4f-4b51-a95e-eb2cf8593545" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.480000 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc 
kubenswrapper[4687]: I0312 17:18:26.480047 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.480085 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.480075 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.480311 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.480439 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.487565 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"7ccb120aeebfe27d31cad2d606816a1479f4c9372a9527cc60301cc3f2f78534"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.492658 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" containerID="cri-o://7ccb120aeebfe27d31cad2d606816a1479f4c9372a9527cc60301cc3f2f78534" gracePeriod=30 Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.576898 4687 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.577025 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.772159 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.775190 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.878669 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.878670 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" podUID="dc5ebdf2-e54a-4c66-abb7-35039f9226dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:26 crc kubenswrapper[4687]: I0312 17:18:26.878722 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.078601 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" podUID="ecc97932-9eae-4d08-910b-b68e0e7d8002" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.238874 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.238903 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.238941 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.238955 
4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.423601 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" podUID="9af65423-8d26-4ff5-97ee-711dc0c4501b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.810510 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.810631 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:27 crc kubenswrapper[4687]: I0312 17:18:27.822743 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.279442 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.279511 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.280180 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.75:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.280121 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.573666 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.573717 4687 patch_prober.go:28] interesting 
pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.573756 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.573773 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.680436 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.680468 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.680522 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.74:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.680515 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.765097 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.765176 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.765301 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.781145 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" 
containerStatusID={"Type":"cri-o","ID":"2aa27c071ee14d55ab24cea39beeae927da8dff3e369144a2731dff4c1cfbb59"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.782616 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" containerID="cri-o://2aa27c071ee14d55ab24cea39beeae927da8dff3e369144a2731dff4c1cfbb59" gracePeriod=30 Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.785775 4687 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-z25z9 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.785782 4687 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-z25z9 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.785818 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" podUID="98e16c84-ec9c-482a-8962-ce13556ffd74" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.785830 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-z25z9" podUID="98e16c84-ec9c-482a-8962-ce13556ffd74" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.797011 4687 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-jp88k container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:28 crc kubenswrapper[4687]: I0312 17:18:28.797073 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" podUID="97a0494c-2509-4e76-afd9-fd2be9482d5d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.185165 4687 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g4lgw container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.185223 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" 
podUID="e8e442fd-56a6-49e5-b34d-86331dab75f4" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.185708 4687 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g4lgw container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.185732 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" podUID="e8e442fd-56a6-49e5-b34d-86331dab75f4" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.480332 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.480442 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.819918 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-pjvcq" podUID="40b2acd2-7fab-41ca-9ba5-7f8a5dc50606" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.819937 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-g9hdt" podUID="50316ae5-82e3-4dbc-ba50-dd2046abc0e1" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:29 crc kubenswrapper[4687]: I0312 17:18:29.820068 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g9hdt" podUID="50316ae5-82e3-4dbc-ba50-dd2046abc0e1" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:29 crc kubenswrapper[4687]: E0312 17:18:29.821331 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.399685 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.399766 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.399812 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.399825 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.738139 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:18:30 crc kubenswrapper[4687]: E0312 17:18:30.744526 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.763729 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" podUID="94a076eb-c02b-467b-bdc2-7a33ba3ec8a1" containerName="sbdb" probeResult="failure" output="command timed out" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.764388 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.770540 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-dmvhk" podUID="94a076eb-c02b-467b-bdc2-7a33ba3ec8a1" containerName="nbdb" probeResult="failure" output="command timed out" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832667 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832707 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get 
\"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832919 4687 patch_prober.go:28] interesting pod/thanos-querier-5d45cd5b67-f9k6z container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.89:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832968 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" podUID="d33075ef-9184-4b45-9272-360a19902c6e" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.89:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832948 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.833012 4687 patch_prober.go:28] interesting pod/thanos-querier-5d45cd5b67-f9k6z container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.89:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.833046 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" podUID="d33075ef-9184-4b45-9272-360a19902c6e" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.89:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832754 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832746 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.833070 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.832673 4687 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:30 crc kubenswrapper[4687]: I0312 17:18:30.833254 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.213746 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.214109 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.213774 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.214172 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.238419 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.238478 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.286599 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.286665 4687 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.422596 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.422672 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.422925 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.422958 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.479452 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.479508 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.761994 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.763161 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.840759 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 
17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.840879 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:31 crc kubenswrapper[4687]: I0312 17:18:31.841042 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.047865 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nz548 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.047964 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" podUID="54b72843-9c3b-48ea-b74c-5a8b0872e66d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.047886 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-nz548 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.048032 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nz548" podUID="54b72843-9c3b-48ea-b74c-5a8b0872e66d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.112723 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.112795 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.117409 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.117451 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" 
podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.117486 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.117981 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.118060 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.118149 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.119886 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"cf9596c852cd6aaa695559bf7df46289adc3a7a5797d706c4aad529f14f76a59"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" containerMessage="Container packageserver failed liveness probe, will be restarted" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.120608 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" containerID="cri-o://cf9596c852cd6aaa695559bf7df46289adc3a7a5797d706c4aad529f14f76a59" gracePeriod=30 Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.265536 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" podUID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.265576 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podUID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.265686 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.265954 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podUID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" 
containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440607 4687 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2dm8z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440671 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440724 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440786 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440848 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440856 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440874 4687 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2dm8z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440884 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440898 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440908 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.440986 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.441016 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.441795 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"b56384eb952f96a537f2ae1da600aba3a28f63a5470f329822f66cf329002e86"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.441847 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" containerID="cri-o://b56384eb952f96a537f2ae1da600aba3a28f63a5470f329822f66cf329002e86" gracePeriod=30 Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.442583 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"1d48dce0f8ab7cbc5dadae8209ab1312fd6aa0ddd107c12a9e4236abfbf451e3"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.442630 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" containerID="cri-o://1d48dce0f8ab7cbc5dadae8209ab1312fd6aa0ddd107c12a9e4236abfbf451e3" gracePeriod=30 Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.599080 4687 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-5hwpl container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.599139 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" podUID="5046d5fc-693f-47bc-bae2-c3430c7e6b24" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc 
kubenswrapper[4687]: I0312 17:18:32.599180 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.600689 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"2b22319cb9c7470250fb1e55c8c0bbae56ff74c821fe846297bfda68bc1e516a"} pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.600727 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" podUID="5046d5fc-693f-47bc-bae2-c3430c7e6b24" containerName="authentication-operator" containerID="cri-o://2b22319cb9c7470250fb1e55c8c0bbae56ff74c821fe846297bfda68bc1e516a" gracePeriod=30 Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.651587 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.651656 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.695606 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-4dcdp" podUID="f46caff8-15ce-49be-97d0-08e60d937972" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.37:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.762946 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.763169 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.765027 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.767641 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-z2wpn" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.767725 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/certified-operators-z2wpn" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.832580 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podUID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.832750 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podUID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.832814 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.833102 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.834467 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"93ec49e0b993f5e4105fdb7cf5b2578b54c63020aedf051f1ad7f6a94717e934"} pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" containerMessage="Container webhook-server failed liveness probe, will be restarted" Mar 12 17:18:32 crc kubenswrapper[4687]: I0312 17:18:32.834505 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" podUID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerName="webhook-server" containerID="cri-o://93ec49e0b993f5e4105fdb7cf5b2578b54c63020aedf051f1ad7f6a94717e934" gracePeriod=2 Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.063134 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-8k59p container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.063198 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" podUID="33969389-2dd2-4c4b-ae70-d6e71f0fdf14" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.063276 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.153605 4687 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-6zsz9 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.72:3101/ready\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.153675 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" podUID="bc2003f0-4f8e-4e59-8a1a-dd7be452b232" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.153763 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.164384 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.164439 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.260327 4687 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-bxc6s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.260420 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" podUID="ac17b136-46f1-4129-a22f-bcd3baaf7813" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.260505 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.280752 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.280833 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.75:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.307619 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/opa 
namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.307626 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podUID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.307684 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.368546 4687 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fdbl6 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.368624 4687 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-fdbl6 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.368670 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" podUID="9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.368710 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-fdbl6" podUID="9d44901b-4ca1-4c1e-a1ce-14a8f3ba1b8c" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.76:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.442034 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.442862 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.554783 4687 patch_prober.go:28] interesting pod/loki-operator-controller-manager-56bfd9f789-6lvcv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.555233 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" podUID="4568a6b6-c008-4ea7-abec-b824324732d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.563893 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.631092 4687 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-52fkb container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.631164 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" podUID="6a654f3a-4b2a-408c-87c1-908616a79eb3" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.678161 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.678233 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.683497 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.683576 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-4xd8n" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.683647 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 
17:18:33.683723 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.683781 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4xd8n" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.683809 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-4xd8n" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.685289 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"5d2e02721609584bc7882266cb5d8d1b70ab8951a712b80dde05ac9f08cf07ea"} pod="metallb-system/frr-k8s-4xd8n" containerMessage="Container controller failed liveness probe, will be restarted" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.685324 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"cea997feac1bbb1cbf2f45f0fee666bc0a3a3c04a749e0d26bb1262825aabd12"} pod="metallb-system/frr-k8s-4xd8n" containerMessage="Container frr failed liveness probe, will be restarted" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.685431 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" containerID="cri-o://5d2e02721609584bc7882266cb5d8d1b70ab8951a712b80dde05ac9f08cf07ea" gracePeriod=2 Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.763661 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.765763 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-lbgkx" podUID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.765980 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-lbgkx" podUID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.767562 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.767571 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.767640 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.74:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.768107 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.768886 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.768918 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.772013 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"3f496599538811d3982293ecbd6894138a2b44720f612fd4684a48035a4468fc"} pod="metallb-system/controller-7bb4cc7c98-hq4lb" containerMessage="Container controller failed liveness probe, will be restarted" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.772097 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" containerID="cri-o://3f496599538811d3982293ecbd6894138a2b44720f612fd4684a48035a4468fc" gracePeriod=2 Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.850551 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.850654 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.850715 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.850766 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.850814 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.850887 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.857297 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"2194eae52541904551456546389f27d22048c7a40614c3e06c18fe979e6a2432"} pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Mar 12 17:18:33 crc kubenswrapper[4687]: I0312 17:18:33.857384 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" containerID="cri-o://2194eae52541904551456546389f27d22048c7a40614c3e06c18fe979e6a2432" gracePeriod=30 Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.064911 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-8k59p container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.064987 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" podUID="33969389-2dd2-4c4b-ae70-d6e71f0fdf14" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.214695 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" podUID="89dd944c-557b-4060-914f-c5287ed954bb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.214940 4687 patch_prober.go:28] interesting pod/metrics-server-5955fd9895-8btf6 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215010 4687 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-6zsz9 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215019 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" podUID="66bdc25f-19d1-4b63-83e6-ad246f6722e8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 
crc kubenswrapper[4687]: I0312 17:18:34.215067 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" podUID="bc2003f0-4f8e-4e59-8a1a-dd7be452b232" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215173 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qgv9f" podUID="89dd944c-557b-4060-914f-c5287ed954bb" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215264 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="550e3dcc-162b-4c82-8a8f-81e03e689772" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215336 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="550e3dcc-162b-4c82-8a8f-81e03e689772" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.14:8080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215819 4687 patch_prober.go:28] interesting pod/metrics-server-5955fd9895-8btf6 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215859 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" podUID="66bdc25f-19d1-4b63-83e6-ad246f6722e8" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.91:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.215903 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.217572 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"90fb3737b76133d7293c829b2e35edef4ace11e5b9f9e88e9723dc1cd643ef71"} pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" containerMessage="Container metrics-server failed liveness probe, will be restarted" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.217621 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" podUID="66bdc25f-19d1-4b63-83e6-ad246f6722e8" containerName="metrics-server" containerID="cri-o://90fb3737b76133d7293c829b2e35edef4ace11e5b9f9e88e9723dc1cd643ef71" gracePeriod=170 Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.264038 4687 patch_prober.go:28] interesting 
pod/logging-loki-query-frontend-ff66c4dc9-bxc6s container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.264096 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" podUID="ac17b136-46f1-4129-a22f-bcd3baaf7813" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.1.73:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.279903 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.75:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.279958 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.75:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.280161 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.1.75:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.280250 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.75:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.425578 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.425681 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.467746 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" podUID="af6289c5-2a9a-4429-96d6-3c7bbff706e0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.509548 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" podUID="63f537cc-6a26-4a05-9b17-80549297e9f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.550633 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.550739 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.557173 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.567611 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.567714 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.657634 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" podUID="718e95dd-fb86-4403-8048-d68f1f23d3ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.657680 4687 patch_prober.go:28] interesting pod/loki-operator-controller-manager-56bfd9f789-6lvcv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.657976 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.657969 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" podUID="4568a6b6-c008-4ea7-abec-b824324732d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.1.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.711610 4687 patch_prober.go:28] interesting pod/monitoring-plugin-dd8c9f9fd-2l56l container/monitoring-plugin namespace/openshift-monitoring: Readiness 
probe status=failure output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.711682 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" podUID="7e3a243a-64b3-42a8-aa54-63ac6ba629c2" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.711758 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.763420 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-pjvcq" podUID="40b2acd2-7fab-41ca-9ba5-7f8a5dc50606" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.763593 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.836573 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.836600 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" podUID="0fc40919-64b1-4b8c-ab92-b9297cb5c352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.836670 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.78:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.836699 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.836758 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.1.78:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.836939 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 
17:18:34.841793 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.877590 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" podUID="79d0c51f-999a-4e39-b6b5-aecf10472a4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.918578 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podUID="1c9c8552-26b0-408f-bd09-40c74041cbfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.918634 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-hq4lb" podUID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.99:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.918665 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.928891 4687 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.928938 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.928970 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.931290 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"6e1c4361fcbf38a2eed0fcf8d9de9c162cd6c7c7d295f05bd4df892bbc584ad9"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Mar 12 17:18:34 crc kubenswrapper[4687]: I0312 17:18:34.931404 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" 
containerID="cri-o://6e1c4361fcbf38a2eed0fcf8d9de9c162cd6c7c7d295f05bd4df892bbc584ad9" gracePeriod=30 Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.009608 4687 patch_prober.go:28] interesting pod/console-6c7b658b6f-wgtnl container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.009681 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-6c7b658b6f-wgtnl" podUID="19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.009732 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.011078 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"5cd606511b68f2e1e6cf249862a9f03a5baf74b6810394414fc5431f837a1c6c"} pod="openshift-console/console-6c7b658b6f-wgtnl" containerMessage="Container console failed liveness probe, will be restarted" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.051619 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.051741 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.107609 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" podUID="1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.371955 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" podUID="65afd209-a452-442f-853d-d2e062fa2530" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.372453 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.372078 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" podUID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.412555 4687 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.412671 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.413008 4687 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.79:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.413063 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="f017a70e-cb13-441a-a70c-0809569c1c52" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.1.79:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.413151 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podUID="066b8087-d58d-4c75-a4bb-4a4b26710855" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.413539 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.454683 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podUID="616298f1-0baf-428d-9bb9-3a87f52085e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.454785 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.495610 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.495665 4687 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.80:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.495715 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-index-gateway-0" podUID="f56100a9-2dec-4a46-a619-6922b78f7e16" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.1.80:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.578841 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.579537 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jrgss" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.580298 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.580342 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.620551 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.620585 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.620676 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.620701 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-jrgss" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.662648 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.662747 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.705638 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" podUID="718e95dd-fb86-4403-8048-d68f1f23d3ca" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.712849 4687 patch_prober.go:28] interesting pod/monitoring-plugin-dd8c9f9fd-2l56l container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.712905 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" podUID="7e3a243a-64b3-42a8-aa54-63ac6ba629c2" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.93:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.774659 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-pjvcq" podUID="40b2acd2-7fab-41ca-9ba5-7f8a5dc50606" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.806726 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.806856 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.806882 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.806938 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.806954 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.807188 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.809455 4687 patch_prober.go:28] interesting pod/thanos-querier-5d45cd5b67-f9k6z container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.89:9091/-/ready\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.809498 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5d45cd5b67-f9k6z" podUID="d33075ef-9184-4b45-9272-360a19902c6e" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.89:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.809983 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"a8a1a73f16889133cda48ef8456502d7c2a9e455a0098684eb7c44f6e43592a7"} pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.878588 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.959731 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" podUID="1c9c8552-26b0-408f-bd09-40c74041cbfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.968983 4687 patch_prober.go:28] interesting pod/console-6c7b658b6f-wgtnl container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.969067 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6c7b658b6f-wgtnl" podUID="19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:35 crc kubenswrapper[4687]: I0312 17:18:35.969171 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.093630 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.159091 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.232135 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.232211 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.232277 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.232294 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.232328 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.234106 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"68b689f7cf799083db3dac78348bcfb0ff24f01877b7bce7261a91b72eadfc06"} pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" containerMessage="Container controller-manager failed liveness probe, will be restarted" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.234150 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" containerID="cri-o://68b689f7cf799083db3dac78348bcfb0ff24f01877b7bce7261a91b72eadfc06" gracePeriod=30 Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.461840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" event={"ID":"69e1152d-3280-4ab7-81dd-dc83f0daa3dc","Type":"ContainerDied","Data":"93ec49e0b993f5e4105fdb7cf5b2578b54c63020aedf051f1ad7f6a94717e934"} Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.469596 4687 generic.go:334] "Generic (PLEG): container finished" podID="69e1152d-3280-4ab7-81dd-dc83f0daa3dc" containerID="93ec49e0b993f5e4105fdb7cf5b2578b54c63020aedf051f1ad7f6a94717e934" exitCode=137 Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.472670 4687 generic.go:334] "Generic (PLEG): container finished" podID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerID="1d48dce0f8ab7cbc5dadae8209ab1312fd6aa0ddd107c12a9e4236abfbf451e3" exitCode=0 Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.472737 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" event={"ID":"235d7664-1ad8-4601-b279-1b8ff86f0bf7","Type":"ContainerDied","Data":"1d48dce0f8ab7cbc5dadae8209ab1312fd6aa0ddd107c12a9e4236abfbf451e3"} Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.474554 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"ac5076fe2d3cb45cf61cdad57a50156afbd41e29f4164ee794084315a8bc3f4e"} pod="metallb-system/speaker-jrgss" containerMessage="Container speaker failed liveness probe, will be restarted" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.474623 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" containerID="cri-o://ac5076fe2d3cb45cf61cdad57a50156afbd41e29f4164ee794084315a8bc3f4e" gracePeriod=2 Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.578551 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" podUID="90a35858-7aa2-450f-af1f-9686c8be3863" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.578632 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" podUID="43d0733d-5a4f-4b51-a95e-eb2cf8593545" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.621851 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" podUID="616298f1-0baf-428d-9bb9-3a87f52085e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.622174 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" podUID="066b8087-d58d-4c75-a4bb-4a4b26710855" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.622198 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" podUID="43d0733d-5a4f-4b51-a95e-eb2cf8593545" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.622257 4687 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.622290 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.622386 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.622453 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.664621 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769049 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-9x5hb" podUID="7c43d5a9-eafe-4910-acf5-0502509982b3" containerName="ovn-controller" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769087 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-jwnk7" podUID="24d69a73-06c7-48b5-9479-7816c969dafc" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769127 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-jwnk7" podUID="24d69a73-06c7-48b5-9479-7816c969dafc" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769151 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769145 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-b87795674-d968s" podUID="667072f7-1d8a-4f67-87bb-f587f6384ffd" containerName="heat-engine" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769178 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769049 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-69p8v" podUID="ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769284 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9x5hb" podUID="7c43d5a9-eafe-4910-acf5-0502509982b3" containerName="ovn-controller" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769327 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-jwnk7" podUID="24d69a73-06c7-48b5-9479-7816c969dafc" containerName="ovsdb-server" 
probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769327 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-69p8v" podUID="ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769407 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769413 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-jwnk7" podUID="24d69a73-06c7-48b5-9479-7816c969dafc" containerName="ovsdb-server" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769462 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.769592 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.770989 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-b87795674-d968s" podUID="667072f7-1d8a-4f67-87bb-f587f6384ffd" containerName="heat-engine" probeResult="failure" output="command timed out" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.776123 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec"} pod="openstack-operators/openstack-operator-index-l6tqh" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.776157 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" containerID="cri-o://740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec" gracePeriod=30 Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.807530 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.807613 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.841509 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get 
\"https://10.217.0.168:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.919655 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" podUID="dc5ebdf2-e54a-4c66-abb7-35039f9226dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.919732 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" podUID="dc5ebdf2-e54a-4c66-abb7-35039f9226dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:36 crc kubenswrapper[4687]: I0312 17:18:36.919759 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.069183 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-4xd8n" podUID="1003c23c-a0cb-4878-8399-d7b435084227" containerName="frr" containerID="cri-o://cea997feac1bbb1cbf2f45f0fee666bc0a3a3c04a749e0d26bb1262825aabd12" gracePeriod=2 Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.239354 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.239410 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.239442 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.239463 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.239518 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 
12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.240992 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"02dca7bee1b1a57992bccbd0da69a142a5d2da803b27e7a82ac9fc707a34b766"} pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.241031 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" containerID="cri-o://02dca7bee1b1a57992bccbd0da69a142a5d2da803b27e7a82ac9fc707a34b766" gracePeriod=30 Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.464567 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" podUID="9af65423-8d26-4ff5-97ee-711dc0c4501b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.464625 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" podUID="9af65423-8d26-4ff5-97ee-711dc0c4501b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.464972 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.479168 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.479258 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.488414 4687 generic.go:334] "Generic (PLEG): container finished" podID="1003c23c-a0cb-4878-8399-d7b435084227" containerID="5d2e02721609584bc7882266cb5d8d1b70ab8951a712b80dde05ac9f08cf07ea" exitCode=137 Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.488473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerDied","Data":"5d2e02721609584bc7882266cb5d8d1b70ab8951a712b80dde05ac9f08cf07ea"} Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.490941 4687 generic.go:334] "Generic (PLEG): container finished" podID="80778679-d1d9-4307-990d-7e79bf7ce3f3" containerID="3f496599538811d3982293ecbd6894138a2b44720f612fd4684a48035a4468fc" exitCode=137 Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 
17:18:37.490975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hq4lb" event={"ID":"80778679-d1d9-4307-990d-7e79bf7ce3f3","Type":"ContainerDied","Data":"3f496599538811d3982293ecbd6894138a2b44720f612fd4684a48035a4468fc"} Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.665616 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" podUID="43d0733d-5a4f-4b51-a95e-eb2cf8593545" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.665876 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-jrgss" podUID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.766694 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.766794 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="4db876a5-e8fd-4489-9ad8-2eb862247406" containerName="prometheus" probeResult="failure" output="command timed out" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.773181 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 12 17:18:37 crc kubenswrapper[4687]: I0312 17:18:37.961626 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" podUID="dc5ebdf2-e54a-4c66-abb7-35039f9226dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.279789 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.279867 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.75:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.279924 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-87lbf container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.75:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.280041 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-87lbf" 
podUID="7b24457c-bd41-4df3-95a1-10b69540a4af" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.75:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.414599 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" podUID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.507684 4687 generic.go:334] "Generic (PLEG): container finished" podID="1003c23c-a0cb-4878-8399-d7b435084227" containerID="cea997feac1bbb1cbf2f45f0fee666bc0a3a3c04a749e0d26bb1262825aabd12" exitCode=143 Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.507729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerDied","Data":"cea997feac1bbb1cbf2f45f0fee666bc0a3a3c04a749e0d26bb1262825aabd12"} Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.589634 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" podUID="9af65423-8d26-4ff5-97ee-711dc0c4501b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.589681 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.590085 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.589690 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.590139 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.590173 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.590252 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 17:18:38 crc 
kubenswrapper[4687]: I0312 17:18:38.591458 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"56ab79df6442e13a88cdbc149408e2a44139201ef4aa076b527d1b193d0b36f6"} pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" containerMessage="Container operator failed liveness probe, will be restarted" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.591506 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" containerID="cri-o://56ab79df6442e13a88cdbc149408e2a44139201ef4aa076b527d1b193d0b36f6" gracePeriod=30 Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.630238 4687 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-52fkb container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.630306 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-52fkb" podUID="6a654f3a-4b2a-408c-87c1-908616a79eb3" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.677396 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.677457 4687 patch_prober.go:28] interesting pod/logging-loki-gateway-54f8b9b48b-cfshk container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.74:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.677462 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.1.74:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.677506 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-54f8b9b48b-cfshk" podUID="75324694-6fad-4d26-8415-9d7f55ab5c1d" containerName="opa" probeResult="failure" output="Get \"https://10.217.1.74:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.799064 4687 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-jp88k container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.799129 4687 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" podUID="97a0494c-2509-4e76-afd9-fd2be9482d5d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.799205 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.973764 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-g9hdt" podUID="50316ae5-82e3-4dbc-ba50-dd2046abc0e1" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:38 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:38 crc kubenswrapper[4687]: > Mar 12 17:18:38 crc kubenswrapper[4687]: I0312 17:18:38.973784 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-g9hdt" podUID="50316ae5-82e3-4dbc-ba50-dd2046abc0e1" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:38 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:38 crc kubenswrapper[4687]: > Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.185492 4687 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g4lgw container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded" start-of-body= Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.185547 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" podUID="e8e442fd-56a6-49e5-b34d-86331dab75f4" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.185881 4687 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g4lgw container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.185902 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-g4lgw" podUID="e8e442fd-56a6-49e5-b34d-86331dab75f4" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.519386 4687 generic.go:334] "Generic (PLEG): container finished" podID="0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a" containerID="8757a248e2ac7124948888fc8a6f0c041c843a8a4282aaba501b33958ba5d424" exitCode=1 Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.519435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" event={"ID":"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a","Type":"ContainerDied","Data":"8757a248e2ac7124948888fc8a6f0c041c843a8a4282aaba501b33958ba5d424"} Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.527880 4687 
scope.go:117] "RemoveContainer" containerID="8757a248e2ac7124948888fc8a6f0c041c843a8a4282aaba501b33958ba5d424" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.563807 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" podUID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.631607 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.631676 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.670067 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-z2wpn" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:39 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:39 crc kubenswrapper[4687]: > Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.678316 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-lbgkx" podUID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:39 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:39 crc kubenswrapper[4687]: > Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.698839 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" start-of-body= Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.699116 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.698856 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" start-of-body= Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.699609 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.762647 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-nmstate/nmstate-handler-pjvcq" podUID="40b2acd2-7fab-41ca-9ba5-7f8a5dc50606" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.800465 4687 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-jp88k container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.800572 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" podUID="97a0494c-2509-4e76-afd9-fd2be9482d5d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.72:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:39 crc kubenswrapper[4687]: E0312 17:18:39.866015 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" Mar 12 17:18:39 crc kubenswrapper[4687]: I0312 17:18:39.961457 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-858654f9db-6qgpr" podUID="13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.45:9403/livez\": EOF" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.398024 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.398046 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.398703 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.398767 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.399128 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.399281 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.400432 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"aecc83bfbef9250e89d9d01e014f15ce22c049e694713f4cb1daa13138b3348e"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.400500 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" containerID="cri-o://aecc83bfbef9250e89d9d01e014f15ce22c049e694713f4cb1daa13138b3348e" gracePeriod=30 Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.479585 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.479643 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.532776 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-hq4lb" event={"ID":"80778679-d1d9-4307-990d-7e79bf7ce3f3","Type":"ContainerStarted","Data":"d6cdab55cb823ab78ce004d5675de6765f955c7deca15c0238b8e34c04914f6d"} Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.533451 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.540207 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.542163 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.545486 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.545540 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8ffe1bad6177828312ae387f3863013072ae2ccf765bcff5c2a404c7098c82fb" exitCode=1 Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 
17:18:40.545600 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8ffe1bad6177828312ae387f3863013072ae2ccf765bcff5c2a404c7098c82fb"} Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.545635 4687 scope.go:117] "RemoveContainer" containerID="7f437980117f5d03ea8163287190b3dbb86c3c9545a22d9e2e4bfe7abae9300d" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.547206 4687 scope.go:117] "RemoveContainer" containerID="8ffe1bad6177828312ae387f3863013072ae2ccf765bcff5c2a404c7098c82fb" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.550191 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" event={"ID":"69e1152d-3280-4ab7-81dd-dc83f0daa3dc","Type":"ContainerStarted","Data":"ea8cbb7693794dda93f220310c47072245591d63d89d164c8fd26aa61fb77597"} Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.550332 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.553149 4687 generic.go:334] "Generic (PLEG): container finished" podID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerID="cf9596c852cd6aaa695559bf7df46289adc3a7a5797d706c4aad529f14f76a59" exitCode=0 Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.553221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" event={"ID":"8dd77a62-4c2f-409c-af2a-bac22eb53ddc","Type":"ContainerDied","Data":"cf9596c852cd6aaa695559bf7df46289adc3a7a5797d706c4aad529f14f76a59"} Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.560976 4687 generic.go:334] "Generic (PLEG): container finished" podID="00c4362c-6a07-47c7-a60a-bbaf5b9f0260" containerID="ac5076fe2d3cb45cf61cdad57a50156afbd41e29f4164ee794084315a8bc3f4e" exitCode=137 Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.561080 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jrgss" event={"ID":"00c4362c-6a07-47c7-a60a-bbaf5b9f0260","Type":"ContainerDied","Data":"ac5076fe2d3cb45cf61cdad57a50156afbd41e29f4164ee794084315a8bc3f4e"} Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.564606 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" event={"ID":"235d7664-1ad8-4601-b279-1b8ff86f0bf7","Type":"ContainerStarted","Data":"682b9d43f83e396047758e876dd877f7aa876da50610e3365d8e0cd0cd3c7957"} Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.565047 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.565403 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.565441 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.573521 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-z2wpn" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:40 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:40 crc kubenswrapper[4687]: > Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.578820 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-lbgkx" podUID="5d2daa11-2756-4a7c-860a-44c13ab92d91" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:40 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:40 crc kubenswrapper[4687]: > Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.580052 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" podUID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.586279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.593121 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"1abb5a0d54b93d0d6e19e379d5ec06228118f4f5f61129efa50204f956096060"} pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.593545 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" podUID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerName="hostpath-provisioner" containerID="cri-o://1abb5a0d54b93d0d6e19e379d5ec06228118f4f5f61129efa50204f956096060" gracePeriod=30 Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.834013 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.834396 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.834160 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:40 crc kubenswrapper[4687]: I0312 17:18:40.834746 4687 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.117181 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.117459 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.123521 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.211687 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.211764 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.211892 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.212484 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.212548 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.212599 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.213808 4687 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"4baab1dd0a0b0fc03c7a028f48604321ec4f893a91b34959f951173a8838c3f3"} pod="openshift-console-operator/console-operator-58897d9998-hfzwb" containerMessage="Container console-operator failed liveness probe, will be restarted" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.213893 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" containerID="cri-o://4baab1dd0a0b0fc03c7a028f48604321ec4f893a91b34959f951173a8838c3f3" gracePeriod=30 Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.321730 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.321725 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.322237 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.322287 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.322350 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.322399 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.323831 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f7b0b853c1333dc4fae6bb0a7fdfafa94ee5511f908f6305a91173e9883fe190"} pod="openshift-console/downloads-7954f5f757-crmcv" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.323893 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" containerID="cri-o://f7b0b853c1333dc4fae6bb0a7fdfafa94ee5511f908f6305a91173e9883fe190" gracePeriod=2 Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.428605 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get 
\"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.428679 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.428744 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.428819 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.428686 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429243 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429347 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429398 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429421 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429420 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429357 4687 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2dm8z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" 
start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429464 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" podUID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429763 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"402a799402a1ca727253a54ae7b6e6b2357127c4960de5bb4c7d0bd9d16ebfab"} pod="openshift-ingress/router-default-5444994796-55qf7" containerMessage="Container router failed liveness probe, will be restarted" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.429800 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" containerID="cri-o://402a799402a1ca727253a54ae7b6e6b2357127c4960de5bb4c7d0bd9d16ebfab" gracePeriod=10 Mar 12 17:18:41 crc kubenswrapper[4687]: E0312 17:18:41.490088 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec is running failed: container process not found" containerID="740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec" cmd=["grpc_health_probe","-addr=:50051"] Mar 12 17:18:41 crc kubenswrapper[4687]: E0312 17:18:41.490522 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec is running failed: container process not found" containerID="740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec" cmd=["grpc_health_probe","-addr=:50051"] Mar 12 17:18:41 crc kubenswrapper[4687]: E0312 17:18:41.491109 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec is running failed: container process not found" containerID="740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec" cmd=["grpc_health_probe","-addr=:50051"] Mar 12 17:18:41 crc kubenswrapper[4687]: E0312 17:18:41.491146 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-l6tqh" podUID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerName="registry-server" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.512662 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.578713 4687 generic.go:334] "Generic (PLEG): container finished" podID="dd381e8d-4f1d-48ef-b8a7-b10f6c97b334" containerID="740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec" exitCode=0 Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.578800 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l6tqh" event={"ID":"dd381e8d-4f1d-48ef-b8a7-b10f6c97b334","Type":"ContainerDied","Data":"740873a3f928dcc6f356646a16665cd07a4f2168368450bc1a3830d0342e47ec"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.582482 4687 generic.go:334] "Generic (PLEG): container finished" podID="616298f1-0baf-428d-9bb9-3a87f52085e8" containerID="51e26485b980245b59b03319ab39f39fe09888ced297f429fda717a1efc3c661" exitCode=1 Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.582511 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" event={"ID":"616298f1-0baf-428d-9bb9-3a87f52085e8","Type":"ContainerDied","Data":"51e26485b980245b59b03319ab39f39fe09888ced297f429fda717a1efc3c661"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.585115 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.586498 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.586948 4687 scope.go:117] "RemoveContainer" containerID="51e26485b980245b59b03319ab39f39fe09888ced297f429fda717a1efc3c661" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.588724 4687 generic.go:334] "Generic (PLEG): container finished" podID="1c9c8552-26b0-408f-bd09-40c74041cbfa" containerID="1e24499b81deef7d1562e7ac6bf1ec71e792987d29194e0eaf180ae0ab82bbe3" exitCode=1 Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.588768 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" event={"ID":"1c9c8552-26b0-408f-bd09-40c74041cbfa","Type":"ContainerDied","Data":"1e24499b81deef7d1562e7ac6bf1ec71e792987d29194e0eaf180ae0ab82bbe3"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.589307 4687 scope.go:117] "RemoveContainer" containerID="1e24499b81deef7d1562e7ac6bf1ec71e792987d29194e0eaf180ae0ab82bbe3" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.598684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"f880365b9fe9f238085fdb4ceed8a4c2d12c8cefbd89a59a9a7d4a46c1074a54"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.598726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xd8n" event={"ID":"1003c23c-a0cb-4878-8399-d7b435084227","Type":"ContainerStarted","Data":"d7a84b3d177f5cfeeafcc27ae3de6ac7cefa9232280b1ac4a79567a235a88c79"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.600596 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4xd8n" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.611052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jrgss" event={"ID":"00c4362c-6a07-47c7-a60a-bbaf5b9f0260","Type":"ContainerStarted","Data":"bc4e60b0b47d6144058fdb668a7b66387d417533bd307cdb0cc6c8cc5c7098d5"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.611209 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jrgss" 
Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.614329 4687 generic.go:334] "Generic (PLEG): container finished" podID="4568a6b6-c008-4ea7-abec-b824324732d3" containerID="29aece1dd0a74de50c22be1e7df767e9f8f6d6ffc185144dd9aa6a81a345b34b" exitCode=1 Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.614395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" event={"ID":"4568a6b6-c008-4ea7-abec-b824324732d3","Type":"ContainerDied","Data":"29aece1dd0a74de50c22be1e7df767e9f8f6d6ffc185144dd9aa6a81a345b34b"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.616106 4687 scope.go:117] "RemoveContainer" containerID="29aece1dd0a74de50c22be1e7df767e9f8f6d6ffc185144dd9aa6a81a345b34b" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.618150 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" event={"ID":"8dd77a62-4c2f-409c-af2a-bac22eb53ddc","Type":"ContainerStarted","Data":"7724da077600a4cd365fba0a85b32a38a1bd17309f1cabfd07b28e31e0e1e049"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.618407 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.618719 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.618765 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.622836 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" event={"ID":"0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a","Type":"ContainerStarted","Data":"4db7926e2476360728635448d7d0c5828fb98e8c51c48fed9a91eafd04756506"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.622978 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.625526 4687 generic.go:334] "Generic (PLEG): container finished" podID="39ab069c-1ccd-4ad4-b4ea-b71b1b09472f" containerID="fb51c3ab3c7abec2b69974709825627059760e54c31d98baf02f04d3fe1273d3" exitCode=1 Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.625815 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" event={"ID":"39ab069c-1ccd-4ad4-b4ea-b71b1b09472f","Type":"ContainerDied","Data":"fb51c3ab3c7abec2b69974709825627059760e54c31d98baf02f04d3fe1273d3"} Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.626582 4687 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 
10.217.0.31:8443: connect: connection refused" start-of-body= Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.626624 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" podUID="235d7664-1ad8-4601-b279-1b8ff86f0bf7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.626634 4687 scope.go:117] "RemoveContainer" containerID="fb51c3ab3c7abec2b69974709825627059760e54c31d98baf02f04d3fe1273d3" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.734492 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:18:41 crc kubenswrapper[4687]: E0312 17:18:41.736254 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.761923 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.762060 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.778861 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.778965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.840536 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:41 crc kubenswrapper[4687]: I0312 17:18:41.841035 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1d3c2a08-d60e-4b86-858d-f5ac038f566e" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.168:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.058591 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.111732 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.241422 4687 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:42 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:42 crc kubenswrapper[4687]: > Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.253699 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.253757 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.363589 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.363667 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.470542 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.470629 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.513320 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.557572 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4xd8n" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.570600 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.630992 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4xd8n" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.639943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" 
event={"ID":"39ab069c-1ccd-4ad4-b4ea-b71b1b09472f","Type":"ContainerStarted","Data":"9edeead75183451347b5803b4c2803eae2e588a1047acc3e26abc8b46b75f85e"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.640183 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.643170 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" event={"ID":"1c9c8552-26b0-408f-bd09-40c74041cbfa","Type":"ContainerStarted","Data":"e3de6072a46f947de29fd0213b73f0f9c70113e0b4873c59169eed0b90df5639"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.643956 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.650136 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.654188 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.654278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b7092c47a206206798d209e32a11d14697bf72c0df329f5e6bc96891cfe97e4"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.669846 4687 generic.go:334] "Generic (PLEG): container finished" podID="afacf716-028a-4848-a495-83f7c01a47ca" containerID="56ab79df6442e13a88cdbc149408e2a44139201ef4aa076b527d1b193d0b36f6" exitCode=0 Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.669940 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" event={"ID":"afacf716-028a-4848-a495-83f7c01a47ca","Type":"ContainerDied","Data":"56ab79df6442e13a88cdbc149408e2a44139201ef4aa076b527d1b193d0b36f6"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.676115 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l6tqh" event={"ID":"dd381e8d-4f1d-48ef-b8a7-b10f6c97b334","Type":"ContainerStarted","Data":"028e9d32bc1a8bd866abe01e2346066ab00c993f7da5bdb4ae84b2c8e6759f2b"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.679325 4687 generic.go:334] "Generic (PLEG): container finished" podID="9af65423-8d26-4ff5-97ee-711dc0c4501b" containerID="95c4343e2035e1cd9235e1c34dd0cb452fb831329d867670f2d8eb04c1b2f4fd" exitCode=1 Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.679424 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" event={"ID":"9af65423-8d26-4ff5-97ee-711dc0c4501b","Type":"ContainerDied","Data":"95c4343e2035e1cd9235e1c34dd0cb452fb831329d867670f2d8eb04c1b2f4fd"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.680203 4687 scope.go:117] "RemoveContainer" containerID="95c4343e2035e1cd9235e1c34dd0cb452fb831329d867670f2d8eb04c1b2f4fd" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.682111 4687 generic.go:334] "Generic 
(PLEG): container finished" podID="65afd209-a452-442f-853d-d2e062fa2530" containerID="c3c9cc0876bab18f2073cc3f852cfd5f673361faebddbcc40d4b4a0dfe2a1330" exitCode=1 Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.682198 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" event={"ID":"65afd209-a452-442f-853d-d2e062fa2530","Type":"ContainerDied","Data":"c3c9cc0876bab18f2073cc3f852cfd5f673361faebddbcc40d4b4a0dfe2a1330"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.684775 4687 scope.go:117] "RemoveContainer" containerID="c3c9cc0876bab18f2073cc3f852cfd5f673361faebddbcc40d4b4a0dfe2a1330" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.692520 4687 generic.go:334] "Generic (PLEG): container finished" podID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerID="aecc83bfbef9250e89d9d01e014f15ce22c049e694713f4cb1daa13138b3348e" exitCode=0 Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.692612 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" event={"ID":"bc236d4c-8f96-413b-a300-ddb9d524fd23","Type":"ContainerDied","Data":"aecc83bfbef9250e89d9d01e014f15ce22c049e694713f4cb1daa13138b3348e"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.692676 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" event={"ID":"bc236d4c-8f96-413b-a300-ddb9d524fd23","Type":"ContainerStarted","Data":"2a08847d3deb605a108fc55de44067697d3fb9d78bd3a6567b87a146ae356e13"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.692700 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.693189 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": dial tcp 10.217.0.84:8443: connect: connection refused" start-of-body= Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.693250 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": dial tcp 10.217.0.84:8443: connect: connection refused" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.695094 4687 generic.go:334] "Generic (PLEG): container finished" podID="79d0c51f-999a-4e39-b6b5-aecf10472a4c" containerID="f2a1077fa45434f636929cc923217bbb5814ca4b87306c2a7a917546f21a4b47" exitCode=1 Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.695124 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" event={"ID":"79d0c51f-999a-4e39-b6b5-aecf10472a4c","Type":"ContainerDied","Data":"f2a1077fa45434f636929cc923217bbb5814ca4b87306c2a7a917546f21a4b47"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.696907 4687 scope.go:117] "RemoveContainer" containerID="f2a1077fa45434f636929cc923217bbb5814ca4b87306c2a7a917546f21a4b47" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.702247 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" event={"ID":"616298f1-0baf-428d-9bb9-3a87f52085e8","Type":"ContainerStarted","Data":"9d5d87251e53b8b788a46d2b27201e32bc6c7a6e837f8b497a565fe8e18032b9"} Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.702497 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.703936 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6vp4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.703993 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" podUID="8dd77a62-4c2f-409c-af2a-bac22eb53ddc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.704902 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.755115 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.764084 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.766004 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.766331 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.770097 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.770225 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.780869 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"8ecd462191420eb461ff19f9a631dcae15360fcf3606ec918dab0aaa3792744e"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 12 17:18:42 crc kubenswrapper[4687]: I0312 17:18:42.934025 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pjvcq" Mar 12 17:18:43 crc 
kubenswrapper[4687]: I0312 17:18:43.062796 4687 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-8k59p container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.063141 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" podUID="33969389-2dd2-4c4b-ae70-d6e71f0fdf14" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.1.71:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.153713 4687 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-6zsz9 container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.153765 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" podUID="bc2003f0-4f8e-4e59-8a1a-dd7be452b232" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.1.72:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.176040 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-bxc6s" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.383111 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.383296 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" podUID="26adb4e9-0197-4023-b876-afbb572f93d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": dial tcp 10.217.0.104:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.474402 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": dial tcp 10.217.0.107:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.474418 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" podUID="64a70e69-432d-4ddc-8eef-e16f4e374c56" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": dial tcp 10.217.0.107:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.479075 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness 
probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.479126 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.671507 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": dial tcp 10.217.0.111:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.671557 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": dial tcp 10.217.0.111:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.687937 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.687991 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.701139 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" podUID="0fc40919-64b1-4b8c-ab92-b9297cb5c352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": dial tcp 10.217.0.105:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.701592 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" podUID="0fc40919-64b1-4b8c-ab92-b9297cb5c352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": dial tcp 10.217.0.105:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.701705 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.702730 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" podUID="0fc40919-64b1-4b8c-ab92-b9297cb5c352" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": dial tcp 10.217.0.105:8081: connect: connection refused" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.765991 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" probeResult="failure" output="command timed out" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.849860 4687 generic.go:334] "Generic (PLEG): container 
finished" podID="15c585dd-9efa-430b-aeb5-42eaeace0d18" containerID="14d9968e147b5fbf2f9417d1b2050c63de24bbf68ddf7898fd051e857f919df9" exitCode=1 Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.879624 4687 generic.go:334] "Generic (PLEG): container finished" podID="0fc40919-64b1-4b8c-ab92-b9297cb5c352" containerID="5f00b6693c66e586ef439145ff3c614b9879d6d9457c9862982e374cedca0601" exitCode=1 Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.882157 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.882184 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" event={"ID":"15c585dd-9efa-430b-aeb5-42eaeace0d18","Type":"ContainerDied","Data":"14d9968e147b5fbf2f9417d1b2050c63de24bbf68ddf7898fd051e857f919df9"} Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.882205 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" event={"ID":"0fc40919-64b1-4b8c-ab92-b9297cb5c352","Type":"ContainerDied","Data":"5f00b6693c66e586ef439145ff3c614b9879d6d9457c9862982e374cedca0601"} Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.882930 4687 scope.go:117] "RemoveContainer" containerID="5f00b6693c66e586ef439145ff3c614b9879d6d9457c9862982e374cedca0601" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.883611 4687 scope.go:117] "RemoveContainer" containerID="14d9968e147b5fbf2f9417d1b2050c63de24bbf68ddf7898fd051e857f919df9" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.893009 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.893064 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.911582 4687 generic.go:334] "Generic (PLEG): container finished" podID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerID="eaceb2e7b8d8cb691ad8727c13985df5b57e128f9a64b0c62dbab49e958c946b" exitCode=1 Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.911671 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" event={"ID":"7569aa35-67ce-43f4-8e4c-f851973745d9","Type":"ContainerDied","Data":"eaceb2e7b8d8cb691ad8727c13985df5b57e128f9a64b0c62dbab49e958c946b"} Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.912724 4687 scope.go:117] "RemoveContainer" containerID="eaceb2e7b8d8cb691ad8727c13985df5b57e128f9a64b0c62dbab49e958c946b" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.925942 4687 generic.go:334] "Generic (PLEG): container finished" podID="64a70e69-432d-4ddc-8eef-e16f4e374c56" containerID="fcc55729a0a9ac125437dd5f4e28efbdf0ebbed6d6e5c8f231a516b851e1bdcf" exitCode=1 Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.926050 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" event={"ID":"64a70e69-432d-4ddc-8eef-e16f4e374c56","Type":"ContainerDied","Data":"fcc55729a0a9ac125437dd5f4e28efbdf0ebbed6d6e5c8f231a516b851e1bdcf"} Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.927064 4687 
scope.go:117] "RemoveContainer" containerID="fcc55729a0a9ac125437dd5f4e28efbdf0ebbed6d6e5c8f231a516b851e1bdcf" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.957519 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" event={"ID":"4568a6b6-c008-4ea7-abec-b824324732d3","Type":"ContainerStarted","Data":"ea9034ae9e3c0ba497c6c609e3d0205c3eeaa1cacae666b2235a40d66af593bc"} Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.959973 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.979705 4687 generic.go:334] "Generic (PLEG): container finished" podID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerID="02dca7bee1b1a57992bccbd0da69a142a5d2da803b27e7a82ac9fc707a34b766" exitCode=0 Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.979779 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" event={"ID":"bfe3820e-9164-4f99-a317-95c69aa4df0e","Type":"ContainerDied","Data":"02dca7bee1b1a57992bccbd0da69a142a5d2da803b27e7a82ac9fc707a34b766"} Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.989628 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-hfzwb_ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe/console-operator/0.log" Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.989686 4687 generic.go:334] "Generic (PLEG): container finished" podID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerID="4baab1dd0a0b0fc03c7a028f48604321ec4f893a91b34959f951173a8838c3f3" exitCode=1 Mar 12 17:18:43 crc kubenswrapper[4687]: I0312 17:18:43.989779 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" event={"ID":"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe","Type":"ContainerDied","Data":"4baab1dd0a0b0fc03c7a028f48604321ec4f893a91b34959f951173a8838c3f3"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.004034 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": dial tcp 10.217.0.117:8081: connect: connection refused" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.004230 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" podUID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.005114 4687 generic.go:334] "Generic (PLEG): container finished" podID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" containerID="354797209cfe02dc3f0011a03dec290eba9555e6c28177d4f4df6e4f0a582f8f" exitCode=1 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.005167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" event={"ID":"43fc4c76-a11e-4403-81b5-ee741b3c2a63","Type":"ContainerDied","Data":"354797209cfe02dc3f0011a03dec290eba9555e6c28177d4f4df6e4f0a582f8f"} Mar 12 17:18:44 crc 
kubenswrapper[4687]: I0312 17:18:44.006160 4687 scope.go:117] "RemoveContainer" containerID="354797209cfe02dc3f0011a03dec290eba9555e6c28177d4f4df6e4f0a582f8f" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.029759 4687 generic.go:334] "Generic (PLEG): container finished" podID="dc5ebdf2-e54a-4c66-abb7-35039f9226dc" containerID="75b87336dba32d8044793a9adca94371c7650a3a21bb8352c59aa2893cafae65" exitCode=1 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.029840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" event={"ID":"dc5ebdf2-e54a-4c66-abb7-35039f9226dc","Type":"ContainerDied","Data":"75b87336dba32d8044793a9adca94371c7650a3a21bb8352c59aa2893cafae65"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.030746 4687 scope.go:117] "RemoveContainer" containerID="75b87336dba32d8044793a9adca94371c7650a3a21bb8352c59aa2893cafae65" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.035544 4687 generic.go:334] "Generic (PLEG): container finished" podID="43d0733d-5a4f-4b51-a95e-eb2cf8593545" containerID="2fad8e90d08a0e61af0ff788e46212b70d203f4849c7aefa8a4806e3a834b435" exitCode=1 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.035614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" event={"ID":"43d0733d-5a4f-4b51-a95e-eb2cf8593545","Type":"ContainerDied","Data":"2fad8e90d08a0e61af0ff788e46212b70d203f4849c7aefa8a4806e3a834b435"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.036285 4687 scope.go:117] "RemoveContainer" containerID="2fad8e90d08a0e61af0ff788e46212b70d203f4849c7aefa8a4806e3a834b435" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.044206 4687 generic.go:334] "Generic (PLEG): container finished" podID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerID="2aa27c071ee14d55ab24cea39beeae927da8dff3e369144a2731dff4c1cfbb59" exitCode=0 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.044282 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerDied","Data":"2aa27c071ee14d55ab24cea39beeae927da8dff3e369144a2731dff4c1cfbb59"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.067480 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6e1c4361fcbf38a2eed0fcf8d9de9c162cd6c7c7d295f05bd4df892bbc584ad9" exitCode=0 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.067573 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6e1c4361fcbf38a2eed0fcf8d9de9c162cd6c7c7d295f05bd4df892bbc584ad9"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.075885 4687 generic.go:334] "Generic (PLEG): container finished" podID="26adb4e9-0197-4023-b876-afbb572f93d8" containerID="432d2d1d0d1a7ab768a27590fdf60bb1273314361a17bdb5390cebf75e31ef9a" exitCode=1 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.075985 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" event={"ID":"26adb4e9-0197-4023-b876-afbb572f93d8","Type":"ContainerDied","Data":"432d2d1d0d1a7ab768a27590fdf60bb1273314361a17bdb5390cebf75e31ef9a"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.076913 4687 scope.go:117] 
"RemoveContainer" containerID="432d2d1d0d1a7ab768a27590fdf60bb1273314361a17bdb5390cebf75e31ef9a" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.106993 4687 generic.go:334] "Generic (PLEG): container finished" podID="5046d5fc-693f-47bc-bae2-c3430c7e6b24" containerID="2b22319cb9c7470250fb1e55c8c0bbae56ff74c821fe846297bfda68bc1e516a" exitCode=0 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.107073 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" event={"ID":"5046d5fc-693f-47bc-bae2-c3430c7e6b24","Type":"ContainerDied","Data":"2b22319cb9c7470250fb1e55c8c0bbae56ff74c821fe846297bfda68bc1e516a"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.120089 4687 generic.go:334] "Generic (PLEG): container finished" podID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerID="f7b0b853c1333dc4fae6bb0a7fdfafa94ee5511f908f6305a91173e9883fe190" exitCode=0 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.120178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-crmcv" event={"ID":"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8","Type":"ContainerDied","Data":"f7b0b853c1333dc4fae6bb0a7fdfafa94ee5511f908f6305a91173e9883fe190"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.130939 4687 generic.go:334] "Generic (PLEG): container finished" podID="a1680cce-a286-460f-9e3f-145d9b364995" containerID="2194eae52541904551456546389f27d22048c7a40614c3e06c18fe979e6a2432" exitCode=0 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.131039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" event={"ID":"a1680cce-a286-460f-9e3f-145d9b364995","Type":"ContainerDied","Data":"2194eae52541904551456546389f27d22048c7a40614c3e06c18fe979e6a2432"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.146132 4687 generic.go:334] "Generic (PLEG): container finished" podID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerID="68b689f7cf799083db3dac78348bcfb0ff24f01877b7bce7261a91b72eadfc06" exitCode=0 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.146420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" event={"ID":"2bc5989f-becf-4c6b-87ac-89e327bd07b6","Type":"ContainerDied","Data":"68b689f7cf799083db3dac78348bcfb0ff24f01877b7bce7261a91b72eadfc06"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.155780 4687 generic.go:334] "Generic (PLEG): container finished" podID="90a35858-7aa2-450f-af1f-9686c8be3863" containerID="b763c0701f82c0d08c9a41a07ec8941cbe64802cbf8f979f95997fbc09d27bab" exitCode=1 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.155882 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" event={"ID":"90a35858-7aa2-450f-af1f-9686c8be3863","Type":"ContainerDied","Data":"b763c0701f82c0d08c9a41a07ec8941cbe64802cbf8f979f95997fbc09d27bab"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.156765 4687 scope.go:117] "RemoveContainer" containerID="b763c0701f82c0d08c9a41a07ec8941cbe64802cbf8f979f95997fbc09d27bab" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.176322 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-dd8c9f9fd-2l56l" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.176853 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="066b8087-d58d-4c75-a4bb-4a4b26710855" containerID="067176c31217f6764be8756d35108369c6e2f6959023a80443d3d0a837766b1c" exitCode=1 Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.178180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" event={"ID":"066b8087-d58d-4c75-a4bb-4a4b26710855","Type":"ContainerDied","Data":"067176c31217f6764be8756d35108369c6e2f6959023a80443d3d0a837766b1c"} Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.178559 4687 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-2xmr9 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:8443/healthz\": dial tcp 10.217.0.84:8443: connect: connection refused" start-of-body= Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.178596 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" podUID="bc236d4c-8f96-413b-a300-ddb9d524fd23" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.84:8443/healthz\": dial tcp 10.217.0.84:8443: connect: connection refused" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.179142 4687 scope.go:117] "RemoveContainer" containerID="067176c31217f6764be8756d35108369c6e2f6959023a80443d3d0a837766b1c" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.197082 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" podUID="1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": read tcp 10.217.0.2:40716->10.217.0.119:8081: read: connection reset by peer" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.197525 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.197319 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" podUID="1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": read tcp 10.217.0.2:40704->10.217.0.119:8081: read: connection reset by peer" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.198226 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" podUID="1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": dial tcp 10.217.0.119:8081: connect: connection refused" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.284575 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.304877 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.305175 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 
17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.340316 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.474020 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": dial tcp 10.217.0.124:8081: connect: connection refused" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.474043 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" podUID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": dial tcp 10.217.0.124:8081: connect: connection refused" Mar 12 17:18:44 crc kubenswrapper[4687]: I0312 17:18:44.880675 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.176434 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.207465 4687 generic.go:334] "Generic (PLEG): container finished" podID="718e95dd-fb86-4403-8048-d68f1f23d3ca" containerID="7dcb12d2908d034a6d179f551bd3104d15b00ece776cf8bc2033d69e8d445805" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.207570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" event={"ID":"718e95dd-fb86-4403-8048-d68f1f23d3ca","Type":"ContainerDied","Data":"7dcb12d2908d034a6d179f551bd3104d15b00ece776cf8bc2033d69e8d445805"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.208304 4687 scope.go:117] "RemoveContainer" containerID="7dcb12d2908d034a6d179f551bd3104d15b00ece776cf8bc2033d69e8d445805" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.211945 4687 generic.go:334] "Generic (PLEG): container finished" podID="a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7" containerID="ab72c1912098b013ef8ddd51c9c1e28e87f56b5067506b72729e257a96d0740c" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.212009 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" event={"ID":"a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7","Type":"ContainerDied","Data":"ab72c1912098b013ef8ddd51c9c1e28e87f56b5067506b72729e257a96d0740c"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.212866 4687 scope.go:117] "RemoveContainer" containerID="ab72c1912098b013ef8ddd51c9c1e28e87f56b5067506b72729e257a96d0740c" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.216490 4687 generic.go:334] "Generic (PLEG): container finished" podID="af6289c5-2a9a-4429-96d6-3c7bbff706e0" containerID="a293dffacc759d48161734e7b56274b324c76359d177228a99bdd6bab039aef8" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.216545 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" 
event={"ID":"af6289c5-2a9a-4429-96d6-3c7bbff706e0","Type":"ContainerDied","Data":"a293dffacc759d48161734e7b56274b324c76359d177228a99bdd6bab039aef8"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.217246 4687 scope.go:117] "RemoveContainer" containerID="a293dffacc759d48161734e7b56274b324c76359d177228a99bdd6bab039aef8" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.222659 4687 generic.go:334] "Generic (PLEG): container finished" podID="1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0" containerID="c312668402caae941cf03787fc88f56e7fea4b1c85ce04bf64079be16ee3b007" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.222717 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" event={"ID":"1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0","Type":"ContainerDied","Data":"c312668402caae941cf03787fc88f56e7fea4b1c85ce04bf64079be16ee3b007"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.223547 4687 scope.go:117] "RemoveContainer" containerID="c312668402caae941cf03787fc88f56e7fea4b1c85ce04bf64079be16ee3b007" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.234018 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.234074 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.239642 4687 generic.go:334] "Generic (PLEG): container finished" podID="ecc97932-9eae-4d08-910b-b68e0e7d8002" containerID="c48ba5164766a452e3b04daf316a03727777eb478b9c5f50a19ce060413235b5" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.239796 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" event={"ID":"ecc97932-9eae-4d08-910b-b68e0e7d8002","Type":"ContainerDied","Data":"c48ba5164766a452e3b04daf316a03727777eb478b9c5f50a19ce060413235b5"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.240664 4687 scope.go:117] "RemoveContainer" containerID="c48ba5164766a452e3b04daf316a03727777eb478b9c5f50a19ce060413235b5" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.248390 4687 generic.go:334] "Generic (PLEG): container finished" podID="85d59f34-51a3-4c41-836e-9cc32f5da5e4" containerID="8dbf42c1a1eeb118ef1bf8e6b239d81bdecb767fc0e497d1aae3d794b8b381df" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.248465 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" event={"ID":"85d59f34-51a3-4c41-836e-9cc32f5da5e4","Type":"ContainerDied","Data":"8dbf42c1a1eeb118ef1bf8e6b239d81bdecb767fc0e497d1aae3d794b8b381df"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.267513 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerID="7ccb120aeebfe27d31cad2d606816a1479f4c9372a9527cc60301cc3f2f78534" exitCode=0 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 
17:18:45.267663 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" event={"ID":"3dd72ea9-e99c-4d99-914c-6984e54ee89f","Type":"ContainerDied","Data":"7ccb120aeebfe27d31cad2d606816a1479f4c9372a9527cc60301cc3f2f78534"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.278025 4687 generic.go:334] "Generic (PLEG): container finished" podID="bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2" containerID="178e2c7cfb9f0f9009884b4440ce2bbebbe79dfff88909ca6e9fd76c7fea6846" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.278117 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" event={"ID":"bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2","Type":"ContainerDied","Data":"178e2c7cfb9f0f9009884b4440ce2bbebbe79dfff88909ca6e9fd76c7fea6846"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.282033 4687 generic.go:334] "Generic (PLEG): container finished" podID="63f537cc-6a26-4a05-9b17-80549297e9f2" containerID="44b24b3a9e85731fb3c4fcb7090ce068ea1760aa05eaab34d3f6827001819da1" exitCode=1 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.282096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" event={"ID":"63f537cc-6a26-4a05-9b17-80549297e9f2","Type":"ContainerDied","Data":"44b24b3a9e85731fb3c4fcb7090ce068ea1760aa05eaab34d3f6827001819da1"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.294489 4687 generic.go:334] "Generic (PLEG): container finished" podID="f865d3a8-d05b-47c7-a131-31849e5d82ad" containerID="b56384eb952f96a537f2ae1da600aba3a28f63a5470f329822f66cf329002e86" exitCode=0 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.294581 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" event={"ID":"f865d3a8-d05b-47c7-a131-31849e5d82ad","Type":"ContainerDied","Data":"b56384eb952f96a537f2ae1da600aba3a28f63a5470f329822f66cf329002e86"} Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.300243 4687 scope.go:117] "RemoveContainer" containerID="8dbf42c1a1eeb118ef1bf8e6b239d81bdecb767fc0e497d1aae3d794b8b381df" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.300631 4687 scope.go:117] "RemoveContainer" containerID="178e2c7cfb9f0f9009884b4440ce2bbebbe79dfff88909ca6e9fd76c7fea6846" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.302115 4687 scope.go:117] "RemoveContainer" containerID="44b24b3a9e85731fb3c4fcb7090ce068ea1760aa05eaab34d3f6827001819da1" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.587771 4687 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.588092 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.845193 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstack-cell1-galera-0" podUID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerName="galera" containerID="cri-o://8ecd462191420eb461ff19f9a631dcae15360fcf3606ec918dab0aaa3792744e" gracePeriod=27 Mar 12 17:18:45 crc kubenswrapper[4687]: I0312 17:18:45.996437 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" containerID="cri-o://4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338" gracePeriod=27 Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.238994 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" start-of-body= Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.239037 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.347079 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5hwpl" event={"ID":"5046d5fc-693f-47bc-bae2-c3430c7e6b24","Type":"ContainerStarted","Data":"bfc5d7578486a19fec91919d66bc104faa0eb63f2f417dc118d1c12cf964ab69"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.374829 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" event={"ID":"65afd209-a452-442f-853d-d2e062fa2530","Type":"ContainerStarted","Data":"1d75934bea0a3008168e74f7f6ab091b77bf6398b090240546fd2506a04d0c76"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.375876 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.397932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-crmcv" event={"ID":"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8","Type":"ContainerStarted","Data":"4e15f45c9701823172bb976245033dc044bd4b55bdf8f5630a9980467fd3a5f7"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.398701 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.398810 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.398846 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.421087 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" event={"ID":"a1680cce-a286-460f-9e3f-145d9b364995","Type":"ContainerStarted","Data":"317092c0679bb753056cd8aa1f1aa68f532a55a0833fe7bf9f2e0a4866e952fd"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.421271 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.439104 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": dial tcp 10.217.0.78:8080: connect: connection refused" start-of-body= Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.439141 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": dial tcp 10.217.0.78:8080: connect: connection refused" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.469866 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"be90914550ac1d4f1fe09e7c45c11f321b96ac1c385aab603d9627246cce5246"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.470423 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.483502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" event={"ID":"79d0c51f-999a-4e39-b6b5-aecf10472a4c","Type":"ContainerStarted","Data":"763b30c51500e08061f686144e51ab2146bb9d391ac44766daeffef394845a6e"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.483839 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.484870 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.484923 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.492923 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" event={"ID":"bfe3820e-9164-4f99-a317-95c69aa4df0e","Type":"ContainerStarted","Data":"c24cb99509bdeeaf865c2d71dc3a7b0680b7520a3523a2512da1fa2422a9571f"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.493097 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.498674 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" start-of-body= Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.498725 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.501020 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" event={"ID":"26adb4e9-0197-4023-b876-afbb572f93d8","Type":"ContainerStarted","Data":"14ccdbaa4fbedd89694f4cfaa529bd1aec04013ce205351c76a437709ff876e0"} Mar 12 17:18:46 crc kubenswrapper[4687]: I0312 17:18:46.501938 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.483442 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.491942 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" start-of-body= Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.492228 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.578704 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" event={"ID":"3dd72ea9-e99c-4d99-914c-6984e54ee89f","Type":"ContainerStarted","Data":"860bed6be2acf05cf43ebd0cccce57fc108563e949e5ac7a3710d1d4f737d558"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.579235 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.609705 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-g4z6z" event={"ID":"bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2","Type":"ContainerStarted","Data":"0b265e5b59cc2228e2219a1cefb8465ca01feae7964f30935f2af450558bc573"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.623320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" 
event={"ID":"43d0733d-5a4f-4b51-a95e-eb2cf8593545","Type":"ContainerStarted","Data":"1f7e40937a904f54c6d919368b4075e90e5ae4ee1aeeecff4b92e835f1d9706d"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.624504 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.628744 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" event={"ID":"af6289c5-2a9a-4429-96d6-3c7bbff706e0","Type":"ContainerStarted","Data":"bfa633a5a94c2ea5d5149f06be295b5f46ed643dc87b976d58de0d7690a5203b"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.629304 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.636352 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" event={"ID":"15c585dd-9efa-430b-aeb5-42eaeace0d18","Type":"ContainerStarted","Data":"2c0bcc974c04235e476bc6f234d08d227bec61a13e3cd4c4d29ecf2985f16a4a"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.637269 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.637434 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhdjt\" (UniqueName: \"kubernetes.io/projected/51edd34d-2c01-4595-ae73-242d69a16c19-kube-api-access-rhdjt\") pod \"51edd34d-2c01-4595-ae73-242d69a16c19\" (UID: \"51edd34d-2c01-4595-ae73-242d69a16c19\") " Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.640118 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" event={"ID":"9af65423-8d26-4ff5-97ee-711dc0c4501b","Type":"ContainerStarted","Data":"25d54accc3201276704fe98f52086ee3ee402c89fed48b5c5a3dc4e58b0c3b92"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.640632 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.644726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" event={"ID":"1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0","Type":"ContainerStarted","Data":"805e9338895763b03a04f23e00fd543f66c81f015e34b081907b5456f384a8b2"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.644907 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.653630 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" event={"ID":"dc5ebdf2-e54a-4c66-abb7-35039f9226dc","Type":"ContainerStarted","Data":"ad70b6be0ec14941f90d314464ddbab1d92ee955dfe24b19360dcbd135559136"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.654855 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 
17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.663224 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" event={"ID":"afacf716-028a-4848-a495-83f7c01a47ca","Type":"ContainerStarted","Data":"5cdfe832af67698047e1b655cce0953c0d6bedfb5cbb16ddafb77b840df1a075"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.666157 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.666221 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" start-of-body= Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.666249 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.670044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" event={"ID":"ecc97932-9eae-4d08-910b-b68e0e7d8002","Type":"ContainerStarted","Data":"bacd93944436986e94358aada7cd843b463e7363e637a03886226156a060bb7d"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.670710 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.679077 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-hfzwb_ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe/console-operator/0.log" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.679385 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" event={"ID":"ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe","Type":"ContainerStarted","Data":"fd8e77e2e51371c0378981bea85a16e3e638424aa7926e2e34695f2de05ab847"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.681655 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.681949 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" event={"ID":"718e95dd-fb86-4403-8048-d68f1f23d3ca","Type":"ContainerStarted","Data":"6d7e642329fefaece024aa9ddb05bde03e02c5f389075eeaecddabd995374cc5"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.682375 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.682416 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 12 17:18:47 crc 
kubenswrapper[4687]: I0312 17:18:47.682502 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.691818 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" event={"ID":"43fc4c76-a11e-4403-81b5-ee741b3c2a63","Type":"ContainerStarted","Data":"0e63b4b71c8d957094715837cc671ca6db4ec0f09b021aff256ecd61bbb4a6f0"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.692771 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.703807 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51edd34d-2c01-4595-ae73-242d69a16c19-kube-api-access-rhdjt" (OuterVolumeSpecName: "kube-api-access-rhdjt") pod "51edd34d-2c01-4595-ae73-242d69a16c19" (UID: "51edd34d-2c01-4595-ae73-242d69a16c19"). InnerVolumeSpecName "kube-api-access-rhdjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.709887 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" event={"ID":"64a70e69-432d-4ddc-8eef-e16f4e374c56","Type":"ContainerStarted","Data":"2636dbf13009b953bf94b53d3ee95b3ea5e2dc7f526f63ef394ce22c10c5188b"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.711160 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.727863 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" event={"ID":"85d59f34-51a3-4c41-836e-9cc32f5da5e4","Type":"ContainerStarted","Data":"2caafb6236ce38b05da36e76028188aa963ee08f5154aa365c3fa5e72ebccb3c"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.728092 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.745485 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.745594 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.750806 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhdjt\" (UniqueName: \"kubernetes.io/projected/51edd34d-2c01-4595-ae73-242d69a16c19-kube-api-access-rhdjt\") on node \"crc\" DevicePath \"\"" Mar 
12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.773303 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" event={"ID":"2bc5989f-becf-4c6b-87ac-89e327bd07b6","Type":"ContainerStarted","Data":"a6874e487e1f4afec30ac4a73a20a6a4cf17e80fa95d1d6c8989b8779d216d47"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.773430 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.774983 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" event={"ID":"f865d3a8-d05b-47c7-a131-31849e5d82ad","Type":"ContainerStarted","Data":"62f594d4be9401b8ea6c29f0465519858af4b612ccd2c071374dfe46fa19dbc6"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.776550 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.785236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" event={"ID":"a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7","Type":"ContainerStarted","Data":"932990a70689c88bc649254a6230fb872f37b5b736b76aa6187503358653aa4f"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.786585 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.792234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" event={"ID":"90a35858-7aa2-450f-af1f-9686c8be3863","Type":"ContainerStarted","Data":"539b1a91c0835cea31fe063fa8dfce6eeb626f23eff3bbce38183317a1691f8e"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.793504 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.797683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" event={"ID":"51edd34d-2c01-4595-ae73-242d69a16c19","Type":"ContainerDied","Data":"01f547bac8899084afcba268df67da963c84b982346081a133b4d5842874fab8"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.798033 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555598-w5bf7" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.798538 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f547bac8899084afcba268df67da963c84b982346081a133b4d5842874fab8" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.805512 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerStarted","Data":"91f2f09cf7deb698c625fcc967e23710cd59c89615a078e7c108c6826a221a38"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.811337 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" event={"ID":"0fc40919-64b1-4b8c-ab92-b9297cb5c352","Type":"ContainerStarted","Data":"9dc1b580437df7e13cba558068c3f1196cb04be3a0836c25b6fb845d14b05351"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.811537 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.816475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" event={"ID":"63f537cc-6a26-4a05-9b17-80549297e9f2","Type":"ContainerStarted","Data":"6df249e9dbc025be24092bdd8d1e97247f18e7ae945df6c7a011919c26f6ac05"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.816884 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.819004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" event={"ID":"066b8087-d58d-4c75-a4bb-4a4b26710855","Type":"ContainerStarted","Data":"43438074e3d83582127b8f444ccad188957286d358662fa7fe01e96529042fed"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.819959 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.838305 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" event={"ID":"7569aa35-67ce-43f4-8e4c-f851973745d9","Type":"ContainerStarted","Data":"604fa8879db625581d689a0d81784de5b61475d264497ed5bc269b5c4f38a492"} Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.840437 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.840566 4687 patch_prober.go:28] interesting pod/route-controller-manager-86cc6575b9-cgt86 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" start-of-body= Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.840579 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 
17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.840593 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" podUID="bfe3820e-9164-4f99-a317-95c69aa4df0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.840608 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kbtm9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.78:8080/healthz\": dial tcp 10.217.0.78:8080: connect: connection refused" start-of-body= Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.840667 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" podUID="a1680cce-a286-460f-9e3f-145d9b364995" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.78:8080/healthz\": dial tcp 10.217.0.78:8080: connect: connection refused" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.840620 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:18:47 crc kubenswrapper[4687]: I0312 17:18:47.863482 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jp88k" Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.286880 4687 trace.go:236] Trace[1238413096]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (12-Mar-2026 17:18:46.514) (total time: 1739ms): Mar 12 17:18:48 crc kubenswrapper[4687]: Trace[1238413096]: [1.73905654s] [1.73905654s] END Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.786634 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.864234 4687 generic.go:334] "Generic (PLEG): container finished" podID="a742c8fb-2af2-4192-bf5a-475f472b323a" containerID="8ecd462191420eb461ff19f9a631dcae15360fcf3606ec918dab0aaa3792744e" exitCode=0 Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.864463 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a742c8fb-2af2-4192-bf5a-475f472b323a","Type":"ContainerDied","Data":"8ecd462191420eb461ff19f9a631dcae15360fcf3606ec918dab0aaa3792744e"} Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.865437 4687 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tbpsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" start-of-body= Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.865476 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" podUID="afacf716-028a-4848-a495-83f7c01a47ca" containerName="operator" probeResult="failure" output="Get 
\"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.866283 4687 patch_prober.go:28] interesting pod/controller-manager-7c7cd7d4c9-h8rfs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.866328 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" podUID="2bc5989f-becf-4c6b-87ac-89e327bd07b6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.866418 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 12 17:18:48 crc kubenswrapper[4687]: I0312 17:18:48.866446 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 12 17:18:49 crc kubenswrapper[4687]: E0312 17:18:49.829780 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 12 17:18:49 crc kubenswrapper[4687]: E0312 17:18:49.831506 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 12 17:18:49 crc kubenswrapper[4687]: E0312 17:18:49.836959 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 12 17:18:49 crc kubenswrapper[4687]: E0312 17:18:49.837047 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerName="galera" Mar 12 17:18:49 crc kubenswrapper[4687]: I0312 17:18:49.866730 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-2xmr9" Mar 12 17:18:49 crc kubenswrapper[4687]: I0312 17:18:49.879502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"a742c8fb-2af2-4192-bf5a-475f472b323a","Type":"ContainerStarted","Data":"b1a4fba5a2d67e266e98ae9a105ff407e6ca013aa238010506cb0b4270312709"} Mar 12 17:18:49 crc kubenswrapper[4687]: I0312 17:18:49.880003 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 12 17:18:49 crc kubenswrapper[4687]: I0312 17:18:49.880047 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.218050 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.218079 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-hfzwb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.218385 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.218418 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" podUID="ab27a5c0-b9e3-4489-92cf-b0c27fb2f5fe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.237350 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.237404 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.237426 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.237446 4687 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.364374 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Mar 12 17:18:50 crc kubenswrapper[4687]: [+]has-synced ok Mar 12 17:18:50 crc kubenswrapper[4687]: [-]process-running failed: reason withheld Mar 12 17:18:50 crc kubenswrapper[4687]: healthz check failed Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.364453 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.895782 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2f2ec7e-fcd2-4749-9f00-ffe100081b84" containerID="4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338" exitCode=0 Mar 12 17:18:50 crc kubenswrapper[4687]: I0312 17:18:50.895874 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2f2ec7e-fcd2-4749-9f00-ffe100081b84","Type":"ContainerDied","Data":"4a5291d6ba521d52995ca207b3acb6a453e7f109de09750d85bc64ce2bce5338"} Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.062334 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.062412 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.123437 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.141948 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" podUID="43fc4c76-a11e-4403-81b5-ee741b3c2a63" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": dial tcp 10.217.0.103:8081: connect: connection refused" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.258370 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.396971 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6vp4" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.398491 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d9zff" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.487405 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 17:18:51 crc 
kubenswrapper[4687]: I0312 17:18:51.487652 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.800786 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-b584c959d-dtlbw" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.926507 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-55qf7_fcbdc66f-c79c-4a57-a030-665f5320b182/router/0.log" Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.926594 4687 generic.go:334] "Generic (PLEG): container finished" podID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerID="402a799402a1ca727253a54ae7b6e6b2357127c4960de5bb4c7d0bd9d16ebfab" exitCode=137 Mar 12 17:18:51 crc kubenswrapper[4687]: I0312 17:18:51.927943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-55qf7" event={"ID":"fcbdc66f-c79c-4a57-a030-665f5320b182","Type":"ContainerDied","Data":"402a799402a1ca727253a54ae7b6e6b2357127c4960de5bb4c7d0bd9d16ebfab"} Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.075419 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.075963 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.085593 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-9c6b6d984-8k59p" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.162652 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.227767 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-6zsz9" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.479445 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.479767 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.479456 4687 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qkz6l container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.479816 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" 
podUID="3dd72ea9-e99c-4d99-914c-6984e54ee89f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.515786 4687 patch_prober.go:28] interesting pod/loki-operator-controller-manager-56bfd9f789-6lvcv container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.1.70:8081/readyz\": dial tcp 10.217.1.70:8081: connect: connection refused" start-of-body= Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.515841 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" podUID="4568a6b6-c008-4ea7-abec-b824324732d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.1.70:8081/readyz\": dial tcp 10.217.1.70:8081: connect: connection refused" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.572546 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4xd8n" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.661063 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-hq4lb" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.776658 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kbtm9" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.779974 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:18:52 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:18:52 crc kubenswrapper[4687]: > Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.941610 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-55qf7_fcbdc66f-c79c-4a57-a030-665f5320b182/router/0.log" Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.943675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-55qf7" event={"ID":"fcbdc66f-c79c-4a57-a030-665f5320b182","Type":"ContainerStarted","Data":"89e17ff72a7787600d7dc5133dfa86b7391e1c0dd1e17e1580b36ce1320cbf72"} Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.946708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2f2ec7e-fcd2-4749-9f00-ffe100081b84","Type":"ContainerStarted","Data":"f63f33e754a48f4c87cee148748ffbf55c59950a9fe48762e26fac35ff94ccac"} Mar 12 17:18:52 crc kubenswrapper[4687]: I0312 17:18:52.983274 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.023725 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l6tqh" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.338755 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.340388 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router 
namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.340526 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.425628 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" podUID="af6289c5-2a9a-4429-96d6-3c7bbff706e0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": dial tcp 10.217.0.106:8081: connect: connection refused" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.456324 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.468064 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-vqhcv" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.479418 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-nq67j" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.575999 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" podUID="718e95dd-fb86-4403-8048-d68f1f23d3ca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": dial tcp 10.217.0.109:8081: connect: connection refused" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.671334 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" podUID="7569aa35-67ce-43f4-8e4c-f851973745d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": dial tcp 10.217.0.111:8081: connect: connection refused" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.733374 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:18:53 crc kubenswrapper[4687]: E0312 17:18:53.733837 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.749511 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-dx9rg" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.763627 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-68xmx" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.862943 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-wsq7q" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.891088 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-cds6d" Mar 12 17:18:53 crc kubenswrapper[4687]: I0312 17:18:53.891970 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-hmn45" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.022548 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-kjnx8" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.103651 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-4rpv5" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.105005 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-str9n" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.156861 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.156924 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.158299 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"c5372e7800e6cb562729bfd14e6d267da1c44ca0c8855028bd659a66a0367158"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.158350 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" containerID="cri-o://c5372e7800e6cb562729bfd14e6d267da1c44ca0c8855028bd659a66a0367158" gracePeriod=30 Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.283096 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-d9k4b" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.309099 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-v98w4" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.340417 4687 patch_prober.go:28] interesting pod/router-default-5444994796-55qf7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.340463 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-55qf7" podUID="fcbdc66f-c79c-4a57-a030-665f5320b182" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: 
connection refused" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.348678 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-748fccb5bd-pgbh8" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.412276 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-ft9mp" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.473927 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-hngln" Mar 12 17:18:54 crc kubenswrapper[4687]: I0312 17:18:54.534836 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jrgss" Mar 12 17:18:55 crc kubenswrapper[4687]: I0312 17:18:55.246392 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c7cd7d4c9-h8rfs" Mar 12 17:18:55 crc kubenswrapper[4687]: I0312 17:18:55.357089 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 17:18:55 crc kubenswrapper[4687]: I0312 17:18:55.432773 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-7cchw" Mar 12 17:18:55 crc kubenswrapper[4687]: I0312 17:18:55.502694 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qkz6l" Mar 12 17:18:55 crc kubenswrapper[4687]: I0312 17:18:55.849565 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz" Mar 12 17:18:56 crc kubenswrapper[4687]: I0312 17:18:56.055401 4687 generic.go:334] "Generic (PLEG): container finished" podID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerID="9f338c064484e7fe00f8ae2266ee1d3ba072289aae7c6a57467f1cbe9882afca" exitCode=0 Mar 12 17:18:56 crc kubenswrapper[4687]: I0312 17:18:56.055482 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerDied","Data":"9f338c064484e7fe00f8ae2266ee1d3ba072289aae7c6a57467f1cbe9882afca"} Mar 12 17:18:56 crc kubenswrapper[4687]: I0312 17:18:56.055746 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 17:18:56 crc kubenswrapper[4687]: I0312 17:18:56.062325 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-55qf7" Mar 12 17:18:56 crc kubenswrapper[4687]: I0312 17:18:56.254698 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86cc6575b9-cgt86" Mar 12 17:18:56 crc kubenswrapper[4687]: I0312 17:18:56.391436 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-dbdf4d967-glnf2" Mar 12 17:18:57 crc kubenswrapper[4687]: I0312 17:18:57.068468 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" 
event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerStarted","Data":"a9de538fcff48439242be7c371239a555470e952a3a01ce2563c6c69b1b496da"} Mar 12 17:18:57 crc kubenswrapper[4687]: I0312 17:18:57.085524 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9g44w" podStartSLOduration=3.988875745 podStartE2EDuration="57.085500971s" podCreationTimestamp="2026-03-12 17:18:00 +0000 UTC" firstStartedPulling="2026-03-12 17:18:03.455158623 +0000 UTC m=+4532.419120967" lastFinishedPulling="2026-03-12 17:18:56.551783849 +0000 UTC m=+4585.515746193" observedRunningTime="2026-03-12 17:18:57.0839824 +0000 UTC m=+4586.047944744" watchObservedRunningTime="2026-03-12 17:18:57.085500971 +0000 UTC m=+4586.049463315" Mar 12 17:18:57 crc kubenswrapper[4687]: I0312 17:18:57.493289 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tbpsw" Mar 12 17:18:59 crc kubenswrapper[4687]: I0312 17:18:59.822345 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 17:18:59 crc kubenswrapper[4687]: I0312 17:18:59.823965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 17:19:00 crc kubenswrapper[4687]: I0312 17:19:00.237104 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:00 crc kubenswrapper[4687]: I0312 17:19:00.237165 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:00 crc kubenswrapper[4687]: I0312 17:19:00.237105 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:00 crc kubenswrapper[4687]: I0312 17:19:00.237542 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:00 crc kubenswrapper[4687]: I0312 17:19:00.246420 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hfzwb" Mar 12 17:19:01 crc kubenswrapper[4687]: I0312 17:19:01.111511 4687 generic.go:334] "Generic (PLEG): container finished" podID="0ba053ef-190c-4642-ac17-9876798b2390" containerID="c5372e7800e6cb562729bfd14e6d267da1c44ca0c8855028bd659a66a0367158" exitCode=0 Mar 12 17:19:01 crc kubenswrapper[4687]: I0312 17:19:01.111585 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba053ef-190c-4642-ac17-9876798b2390","Type":"ContainerDied","Data":"c5372e7800e6cb562729bfd14e6d267da1c44ca0c8855028bd659a66a0367158"} Mar 12 17:19:01 crc kubenswrapper[4687]: I0312 17:19:01.143387 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-568548c879-hsj4g" Mar 12 17:19:01 crc kubenswrapper[4687]: I0312 17:19:01.276473 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:19:01 crc kubenswrapper[4687]: I0312 17:19:01.276710 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:19:01 crc kubenswrapper[4687]: I0312 17:19:01.374532 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6c7b658b6f-wgtnl" podUID="19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f" containerName="console" containerID="cri-o://5cd606511b68f2e1e6cf249862a9f03a5baf74b6810394414fc5431f837a1c6c" gracePeriod=14 Mar 12 17:19:01 crc kubenswrapper[4687]: I0312 17:19:01.374650 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" containerID="cri-o://a8a1a73f16889133cda48ef8456502d7c2a9e455a0098684eb7c44f6e43592a7" gracePeriod=15 Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.136998 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c7b658b6f-wgtnl_19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f/console/0.log" Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.137312 4687 generic.go:334] "Generic (PLEG): container finished" podID="19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f" containerID="5cd606511b68f2e1e6cf249862a9f03a5baf74b6810394414fc5431f837a1c6c" exitCode=2 Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.137403 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c7b658b6f-wgtnl" event={"ID":"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f","Type":"ContainerDied","Data":"5cd606511b68f2e1e6cf249862a9f03a5baf74b6810394414fc5431f837a1c6c"} Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.140038 4687 generic.go:334] "Generic (PLEG): container finished" podID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerID="a8a1a73f16889133cda48ef8456502d7c2a9e455a0098684eb7c44f6e43592a7" exitCode=0 Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.140114 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" event={"ID":"b1d878fd-d9a8-4044-9fce-70e660b7fcad","Type":"ContainerDied","Data":"a8a1a73f16889133cda48ef8456502d7c2a9e455a0098684eb7c44f6e43592a7"} Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.243060 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:02 crc kubenswrapper[4687]: > Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.400072 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:02 crc kubenswrapper[4687]: > Mar 12 17:19:02 crc kubenswrapper[4687]: I0312 17:19:02.521495 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-56bfd9f789-6lvcv" Mar 12 17:19:03 crc kubenswrapper[4687]: I0312 17:19:03.156931 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c7b658b6f-wgtnl_19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f/console/0.log" Mar 12 17:19:03 crc kubenswrapper[4687]: I0312 17:19:03.157308 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c7b658b6f-wgtnl" event={"ID":"19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f","Type":"ContainerStarted","Data":"158e8fc84985920535f620ec230057cbf9c4feaedf34b6638fad6f10b5207e9a"} Mar 12 17:19:03 crc kubenswrapper[4687]: I0312 17:19:03.427586 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9rsqr" Mar 12 17:19:03 crc kubenswrapper[4687]: I0312 17:19:03.577214 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-d8j97" Mar 12 17:19:03 crc kubenswrapper[4687]: I0312 17:19:03.672732 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-8jrnk" Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.174046 4687 generic.go:334] "Generic (PLEG): container finished" podID="401d4f9b-896e-4926-91ef-c90b5c38ef83" containerID="5919815d56a401871d9c84447b563fbdaffc95d5c02dc6288d0653eca4294e21" exitCode=1 Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.174318 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"401d4f9b-896e-4926-91ef-c90b5c38ef83","Type":"ContainerDied","Data":"5919815d56a401871d9c84447b563fbdaffc95d5c02dc6288d0653eca4294e21"} Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.177902 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" event={"ID":"b1d878fd-d9a8-4044-9fce-70e660b7fcad","Type":"ContainerStarted","Data":"149efcb50d42bbdaa3708a703a6fc96ca83b22e61ef66105dae999347175bfca"} Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.178294 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.178339 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.806025 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.806104 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Mar 12 17:19:04 crc 
kubenswrapper[4687]: I0312 17:19:04.806139 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.968144 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.968480 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.968612 4687 patch_prober.go:28] interesting pod/console-6c7b658b6f-wgtnl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.141:8443/health\": dial tcp 10.217.0.141:8443: connect: connection refused" start-of-body= Mar 12 17:19:04 crc kubenswrapper[4687]: I0312 17:19:04.968679 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6c7b658b6f-wgtnl" podUID="19b5ecc0-df37-4bf5-b4a4-3bdca5826d2f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.141:8443/health\": dial tcp 10.217.0.141:8443: connect: connection refused" Mar 12 17:19:05 crc kubenswrapper[4687]: I0312 17:19:05.189088 4687 patch_prober.go:28] interesting pod/oauth-openshift-85df6bd6d5-qdxtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Mar 12 17:19:05 crc kubenswrapper[4687]: I0312 17:19:05.189141 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" podUID="b1d878fd-d9a8-4044-9fce-70e660b7fcad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.228450 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85df6bd6d5-qdxtd" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.639893 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.732954 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:19:06 crc kubenswrapper[4687]: E0312 17:19:06.733660 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.734413 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config-secret\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.734513 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-config-data\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.734561 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.734784 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.734954 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsw2p\" (UniqueName: \"kubernetes.io/projected/401d4f9b-896e-4926-91ef-c90b5c38ef83-kube-api-access-qsw2p\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.735053 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ssh-key\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.735128 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-temporary\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.735184 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ca-certs\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: 
\"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.735277 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-workdir\") pod \"401d4f9b-896e-4926-91ef-c90b5c38ef83\" (UID: \"401d4f9b-896e-4926-91ef-c90b5c38ef83\") " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.739334 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.743983 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.745889 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-config-data" (OuterVolumeSpecName: "config-data") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.758521 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.758591 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401d4f9b-896e-4926-91ef-c90b5c38ef83-kube-api-access-qsw2p" (OuterVolumeSpecName: "kube-api-access-qsw2p") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "kube-api-access-qsw2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.780072 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.798706 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.802091 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.815253 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "401d4f9b-896e-4926-91ef-c90b5c38ef83" (UID: "401d4f9b-896e-4926-91ef-c90b5c38ef83"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840015 4687 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840042 4687 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840054 4687 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/401d4f9b-896e-4926-91ef-c90b5c38ef83-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840064 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840074 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840082 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/401d4f9b-896e-4926-91ef-c90b5c38ef83-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840122 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840131 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsw2p\" (UniqueName: \"kubernetes.io/projected/401d4f9b-896e-4926-91ef-c90b5c38ef83-kube-api-access-qsw2p\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.840139 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/401d4f9b-896e-4926-91ef-c90b5c38ef83-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.865178 4687 operation_generator.go:917] UnmountDevice succeeded for 
volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 12 17:19:06 crc kubenswrapper[4687]: I0312 17:19:06.942563 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:07 crc kubenswrapper[4687]: I0312 17:19:07.228293 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0ba053ef-190c-4642-ac17-9876798b2390","Type":"ContainerStarted","Data":"7505439468eb046233a0bec5731094561e012a910515ef7ad8963c2de4b042ea"} Mar 12 17:19:07 crc kubenswrapper[4687]: I0312 17:19:07.229929 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 17:19:07 crc kubenswrapper[4687]: I0312 17:19:07.229931 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"401d4f9b-896e-4926-91ef-c90b5c38ef83","Type":"ContainerDied","Data":"3049f2235a9bb52b55635db7eea09abdf0c0a4288a07ee7e0417996aba7dc70c"} Mar 12 17:19:07 crc kubenswrapper[4687]: I0312 17:19:07.230918 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3049f2235a9bb52b55635db7eea09abdf0c0a4288a07ee7e0417996aba7dc70c" Mar 12 17:19:08 crc kubenswrapper[4687]: I0312 17:19:08.574419 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-z2wpn" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:08 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:08 crc kubenswrapper[4687]: > Mar 12 17:19:08 crc kubenswrapper[4687]: I0312 17:19:08.582555 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-z2wpn" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:08 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:08 crc kubenswrapper[4687]: > Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.236984 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.237058 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.237103 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.237102 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.237154 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.238067 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"4e15f45c9701823172bb976245033dc044bd4b55bdf8f5630a9980467fd3a5f7"} pod="openshift-console/downloads-7954f5f757-crmcv" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.238086 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.238099 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" containerID="cri-o://4e15f45c9701823172bb976245033dc044bd4b55bdf8f5630a9980467fd3a5f7" gracePeriod=2 Mar 12 17:19:10 crc kubenswrapper[4687]: I0312 17:19:10.238109 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.302681 4687 generic.go:334] "Generic (PLEG): container finished" podID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerID="4e15f45c9701823172bb976245033dc044bd4b55bdf8f5630a9980467fd3a5f7" exitCode=0 Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.302769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-crmcv" event={"ID":"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8","Type":"ContainerDied","Data":"4e15f45c9701823172bb976245033dc044bd4b55bdf8f5630a9980467fd3a5f7"} Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.303694 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-crmcv" event={"ID":"f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8","Type":"ContainerStarted","Data":"7363d1c21578f3fc7df34424ff5592a328a541956e185e84f4a7a3df0f4cf3e0"} Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.303747 4687 scope.go:117] "RemoveContainer" containerID="f7b0b853c1333dc4fae6bb0a7fdfafa94ee5511f908f6305a91173e9883fe190" Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.305011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.305389 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.305452 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.310874 4687 generic.go:334] "Generic (PLEG): container finished" podID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerID="1abb5a0d54b93d0d6e19e379d5ec06228118f4f5f61129efa50204f956096060" exitCode=137 Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.310922 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerDied","Data":"1abb5a0d54b93d0d6e19e379d5ec06228118f4f5f61129efa50204f956096060"} Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.515120 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-546598f745-bbcrq" Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.961837 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 17:19:11 crc kubenswrapper[4687]: I0312 17:19:11.993890 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:19:12 crc kubenswrapper[4687]: I0312 17:19:12.246300 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:12 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:12 crc kubenswrapper[4687]: > Mar 12 17:19:12 crc kubenswrapper[4687]: I0312 17:19:12.332089 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:12 crc kubenswrapper[4687]: I0312 17:19:12.332152 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:12 crc kubenswrapper[4687]: I0312 17:19:12.358824 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:12 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:12 crc kubenswrapper[4687]: > Mar 12 17:19:13 crc kubenswrapper[4687]: I0312 17:19:13.345742 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerStarted","Data":"5d7315a7bc4d6114760e7143c4e1c6627e200fb80e90f00b1b7eb60c08a8fd0d"} Mar 12 17:19:15 crc kubenswrapper[4687]: I0312 17:19:15.058274 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 17:19:15 crc kubenswrapper[4687]: I0312 17:19:15.062125 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-console/console-6c7b658b6f-wgtnl" Mar 12 17:19:16 crc kubenswrapper[4687]: I0312 17:19:16.998102 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.655912 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 17:19:17 crc kubenswrapper[4687]: E0312 17:19:17.683916 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51edd34d-2c01-4595-ae73-242d69a16c19" containerName="oc" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.684529 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="51edd34d-2c01-4595-ae73-242d69a16c19" containerName="oc" Mar 12 17:19:17 crc kubenswrapper[4687]: E0312 17:19:17.684606 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401d4f9b-896e-4926-91ef-c90b5c38ef83" containerName="tempest-tests-tempest-tests-runner" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.684615 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="401d4f9b-896e-4926-91ef-c90b5c38ef83" containerName="tempest-tests-tempest-tests-runner" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.687132 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="401d4f9b-896e-4926-91ef-c90b5c38ef83" containerName="tempest-tests-tempest-tests-runner" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.687180 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="51edd34d-2c01-4595-ae73-242d69a16c19" containerName="oc" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.690705 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.697594 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bvmm5" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.750743 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.820017 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjfs\" (UniqueName: \"kubernetes.io/projected/78208d69-5957-4700-a77e-3da993e19acd-kube-api-access-qkjfs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78208d69-5957-4700-a77e-3da993e19acd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.820559 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78208d69-5957-4700-a77e-3da993e19acd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.922717 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkjfs\" (UniqueName: \"kubernetes.io/projected/78208d69-5957-4700-a77e-3da993e19acd-kube-api-access-qkjfs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78208d69-5957-4700-a77e-3da993e19acd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.922886 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78208d69-5957-4700-a77e-3da993e19acd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.923454 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78208d69-5957-4700-a77e-3da993e19acd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.991207 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkjfs\" (UniqueName: \"kubernetes.io/projected/78208d69-5957-4700-a77e-3da993e19acd-kube-api-access-qkjfs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78208d69-5957-4700-a77e-3da993e19acd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:17 crc kubenswrapper[4687]: I0312 17:19:17.996171 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78208d69-5957-4700-a77e-3da993e19acd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:18 crc 
kubenswrapper[4687]: I0312 17:19:18.136695 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 17:19:18 crc kubenswrapper[4687]: I0312 17:19:18.733926 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:19:18 crc kubenswrapper[4687]: E0312 17:19:18.734884 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:19:19 crc kubenswrapper[4687]: I0312 17:19:19.723842 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555592-8s27b"] Mar 12 17:19:19 crc kubenswrapper[4687]: I0312 17:19:19.747169 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555592-8s27b"] Mar 12 17:19:19 crc kubenswrapper[4687]: I0312 17:19:19.792705 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48185718-1b64-44ea-9e80-a365e6b32303" path="/var/lib/kubelet/pods/48185718-1b64-44ea-9e80-a365e6b32303/volumes" Mar 12 17:19:19 crc kubenswrapper[4687]: I0312 17:19:19.871271 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 17:19:19 crc kubenswrapper[4687]: I0312 17:19:19.896608 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 17:19:20 crc kubenswrapper[4687]: I0312 17:19:20.238083 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:20 crc kubenswrapper[4687]: I0312 17:19:20.238113 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:20 crc kubenswrapper[4687]: I0312 17:19:20.238141 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:20 crc kubenswrapper[4687]: I0312 17:19:20.238165 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:20 crc kubenswrapper[4687]: I0312 17:19:20.434011 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"78208d69-5957-4700-a77e-3da993e19acd","Type":"ContainerStarted","Data":"dbe599fa1c880c2adbc00fe0d160cf249dee486b356dd0df2aac8edc6e354e80"} Mar 12 
17:19:21 crc kubenswrapper[4687]: I0312 17:19:21.419710 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2dm8z" Mar 12 17:19:21 crc kubenswrapper[4687]: I0312 17:19:21.622641 4687 trace.go:236] Trace[663092509]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-lbgkx" (12-Mar-2026 17:19:20.341) (total time: 1281ms): Mar 12 17:19:21 crc kubenswrapper[4687]: Trace[663092509]: [1.281241751s] [1.281241751s] END Mar 12 17:19:21 crc kubenswrapper[4687]: I0312 17:19:21.983610 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:19:22 crc kubenswrapper[4687]: I0312 17:19:22.285481 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:22 crc kubenswrapper[4687]: > Mar 12 17:19:22 crc kubenswrapper[4687]: I0312 17:19:22.343350 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:22 crc kubenswrapper[4687]: > Mar 12 17:19:22 crc kubenswrapper[4687]: I0312 17:19:22.467251 4687 generic.go:334] "Generic (PLEG): container finished" podID="8955cbff-df56-4d9d-8191-6863b7fde61e" containerID="6d6232cb7cfaaf007506aaa14e4009faed5cad41b25ea5e933c0c96e25368c04" exitCode=1 Mar 12 17:19:22 crc kubenswrapper[4687]: I0312 17:19:22.467303 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerDied","Data":"6d6232cb7cfaaf007506aaa14e4009faed5cad41b25ea5e933c0c96e25368c04"} Mar 12 17:19:22 crc kubenswrapper[4687]: I0312 17:19:22.468614 4687 scope.go:117] "RemoveContainer" containerID="6d6232cb7cfaaf007506aaa14e4009faed5cad41b25ea5e933c0c96e25368c04" Mar 12 17:19:24 crc kubenswrapper[4687]: I0312 17:19:24.492744 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dtxpv" event={"ID":"8955cbff-df56-4d9d-8191-6863b7fde61e","Type":"ContainerStarted","Data":"356cc539ec3e04c722a327ddf9c5c36de507dc4087c497d61f72a15324113a38"} Mar 12 17:19:24 crc kubenswrapper[4687]: I0312 17:19:24.494611 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"78208d69-5957-4700-a77e-3da993e19acd","Type":"ContainerStarted","Data":"b9086feaa4e3868f876b20b81e66b8ca3f779c9c89fca6960019f4de8aefb46c"} Mar 12 17:19:24 crc kubenswrapper[4687]: I0312 17:19:24.541160 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=4.831653146 podStartE2EDuration="7.541140992s" podCreationTimestamp="2026-03-12 17:19:17 +0000 UTC" firstStartedPulling="2026-03-12 17:19:19.896098941 +0000 UTC m=+4608.860061275" lastFinishedPulling="2026-03-12 
17:19:22.605586777 +0000 UTC m=+4611.569549121" observedRunningTime="2026-03-12 17:19:24.540907636 +0000 UTC m=+4613.504869980" watchObservedRunningTime="2026-03-12 17:19:24.541140992 +0000 UTC m=+4613.505103336" Mar 12 17:19:27 crc kubenswrapper[4687]: I0312 17:19:27.049263 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:19:29 crc kubenswrapper[4687]: I0312 17:19:29.737173 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:19:29 crc kubenswrapper[4687]: E0312 17:19:29.738442 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.072599 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.075483 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="sg-core" containerID="cri-o://ea53b9ca3f6bca1e34c53af17d1b75ec8028f5f493be6cdacd088905cacd63e5" gracePeriod=30 Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.075523 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-notification-agent" containerID="cri-o://e3cb327c7a439f6ea0858b0d273042fb9b24289ef0835513b45224ba5c8bfa42" gracePeriod=30 Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.075490 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="proxy-httpd" containerID="cri-o://fb295708edbb2f5907b5c1b3de1a374d998066ad455cfc52bb7791d984a321e3" gracePeriod=30 Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.075496 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" containerID="cri-o://91f2f09cf7deb698c625fcc967e23710cd59c89615a078e7c108c6826a221a38" gracePeriod=30 Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.237105 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.237136 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-crmcv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.237163 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.237188 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-crmcv" podUID="f62af0c9-1bcc-4f36-bed0-85bf8a22f4a8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.608138 4687 generic.go:334] "Generic (PLEG): container finished" podID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerID="ea53b9ca3f6bca1e34c53af17d1b75ec8028f5f493be6cdacd088905cacd63e5" exitCode=2 Mar 12 17:19:30 crc kubenswrapper[4687]: I0312 17:19:30.608212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerDied","Data":"ea53b9ca3f6bca1e34c53af17d1b75ec8028f5f493be6cdacd088905cacd63e5"} Mar 12 17:19:31 crc kubenswrapper[4687]: I0312 17:19:31.623593 4687 generic.go:334] "Generic (PLEG): container finished" podID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerID="91f2f09cf7deb698c625fcc967e23710cd59c89615a078e7c108c6826a221a38" exitCode=0 Mar 12 17:19:31 crc kubenswrapper[4687]: I0312 17:19:31.623660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerDied","Data":"91f2f09cf7deb698c625fcc967e23710cd59c89615a078e7c108c6826a221a38"} Mar 12 17:19:31 crc kubenswrapper[4687]: I0312 17:19:31.623911 4687 scope.go:117] "RemoveContainer" containerID="2aa27c071ee14d55ab24cea39beeae927da8dff3e369144a2731dff4c1cfbb59" Mar 12 17:19:31 crc kubenswrapper[4687]: I0312 17:19:31.956483 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 12 17:19:31 crc kubenswrapper[4687]: I0312 17:19:31.978740 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.025242 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0ba053ef-190c-4642-ac17-9876798b2390" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.094832 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.096296 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.263890 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:32 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:32 crc kubenswrapper[4687]: > Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.344598 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" 
containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:32 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:32 crc kubenswrapper[4687]: > Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.644395 4687 generic.go:334] "Generic (PLEG): container finished" podID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerID="fb295708edbb2f5907b5c1b3de1a374d998066ad455cfc52bb7791d984a321e3" exitCode=0 Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.644429 4687 generic.go:334] "Generic (PLEG): container finished" podID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerID="e3cb327c7a439f6ea0858b0d273042fb9b24289ef0835513b45224ba5c8bfa42" exitCode=0 Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.645326 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerDied","Data":"fb295708edbb2f5907b5c1b3de1a374d998066ad455cfc52bb7791d984a321e3"} Mar 12 17:19:32 crc kubenswrapper[4687]: I0312 17:19:32.645491 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerDied","Data":"e3cb327c7a439f6ea0858b0d273042fb9b24289ef0835513b45224ba5c8bfa42"} Mar 12 17:19:32 crc kubenswrapper[4687]: E0312 17:19:32.821904 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68df22d_ffc8_4f99_a009_1133f37d9a67.slice/crio-e3cb327c7a439f6ea0858b0d273042fb9b24289ef0835513b45224ba5c8bfa42.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68df22d_ffc8_4f99_a009_1133f37d9a67.slice/crio-conmon-e3cb327c7a439f6ea0858b0d273042fb9b24289ef0835513b45224ba5c8bfa42.scope\": RecentStats: unable to find data in memory cache]" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.530197 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.656627 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-log-httpd\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.656702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-config-data\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.656754 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-sg-core-conf-yaml\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.656788 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-run-httpd\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.656882 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgf97\" (UniqueName: \"kubernetes.io/projected/e68df22d-ffc8-4f99-a009-1133f37d9a67-kube-api-access-cgf97\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.656928 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-scripts\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.656985 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-ceilometer-tls-certs\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.657037 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-combined-ca-bundle\") pod \"e68df22d-ffc8-4f99-a009-1133f37d9a67\" (UID: \"e68df22d-ffc8-4f99-a009-1133f37d9a67\") " Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.660058 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e68df22d-ffc8-4f99-a009-1133f37d9a67","Type":"ContainerDied","Data":"4f06b8b856af03ed17d19d54d0665e264bde4a60934bf5edb5e6da0fbf9b1312"} Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.660101 4687 scope.go:117] "RemoveContainer" containerID="91f2f09cf7deb698c625fcc967e23710cd59c89615a078e7c108c6826a221a38" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.660237 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.661609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.662505 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.679807 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-scripts" (OuterVolumeSpecName: "scripts") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.687662 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68df22d-ffc8-4f99-a009-1133f37d9a67-kube-api-access-cgf97" (OuterVolumeSpecName: "kube-api-access-cgf97") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "kube-api-access-cgf97". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.735531 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.762951 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgf97\" (UniqueName: \"kubernetes.io/projected/e68df22d-ffc8-4f99-a009-1133f37d9a67-kube-api-access-cgf97\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.762995 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.763008 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.763021 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.763032 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e68df22d-ffc8-4f99-a009-1133f37d9a67-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.818982 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.819129 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.869000 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.869046 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.919280 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-config-data" (OuterVolumeSpecName: "config-data") pod "e68df22d-ffc8-4f99-a009-1133f37d9a67" (UID: "e68df22d-ffc8-4f99-a009-1133f37d9a67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.940652 4687 scope.go:117] "RemoveContainer" containerID="fb295708edbb2f5907b5c1b3de1a374d998066ad455cfc52bb7791d984a321e3" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.974856 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68df22d-ffc8-4f99-a009-1133f37d9a67-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 17:19:33 crc kubenswrapper[4687]: I0312 17:19:33.984495 4687 scope.go:117] "RemoveContainer" containerID="ea53b9ca3f6bca1e34c53af17d1b75ec8028f5f493be6cdacd088905cacd63e5" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.022737 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.042754 4687 scope.go:117] "RemoveContainer" containerID="e3cb327c7a439f6ea0858b0d273042fb9b24289ef0835513b45224ba5c8bfa42" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.046233 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.101515 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:19:34 crc kubenswrapper[4687]: E0312 17:19:34.106703 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-notification-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.106743 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-notification-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: E0312 17:19:34.106780 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="sg-core" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.106790 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="sg-core" Mar 12 17:19:34 crc kubenswrapper[4687]: E0312 17:19:34.106830 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.106839 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: E0312 17:19:34.106874 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="proxy-httpd" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.106882 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="proxy-httpd" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.107207 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="sg-core" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.107230 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="proxy-httpd" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.107244 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 
17:19:34.107269 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-notification-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: E0312 17:19:34.107586 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.107601 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.107932 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" containerName="ceilometer-central-agent" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.110608 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.114544 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.122151 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.122511 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.122630 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.181242 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.181287 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpzqb\" (UniqueName: \"kubernetes.io/projected/f857165a-722b-4b35-92a8-e8e7b708eeb4-kube-api-access-lpzqb\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.181323 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-log-httpd\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.181479 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.181508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-run-httpd\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 
17:19:34.181548 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-scripts\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.181603 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.181703 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-config-data\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.283455 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-log-httpd\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.283565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.283594 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-run-httpd\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.283633 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-scripts\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.283702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.283776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-config-data\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.283874 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc 
kubenswrapper[4687]: I0312 17:19:34.283896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpzqb\" (UniqueName: \"kubernetes.io/projected/f857165a-722b-4b35-92a8-e8e7b708eeb4-kube-api-access-lpzqb\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.285253 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-log-httpd\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.285519 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-run-httpd\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.290783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.291719 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-scripts\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.291958 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.292340 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-config-data\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.292765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.310962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpzqb\" (UniqueName: \"kubernetes.io/projected/f857165a-722b-4b35-92a8-e8e7b708eeb4-kube-api-access-lpzqb\") pod \"ceilometer-0\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " pod="openstack/ceilometer-0" Mar 12 17:19:34 crc kubenswrapper[4687]: I0312 17:19:34.485346 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:19:35 crc kubenswrapper[4687]: I0312 17:19:35.234468 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:19:35 crc kubenswrapper[4687]: I0312 17:19:35.581473 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 17:19:35 crc kubenswrapper[4687]: I0312 17:19:35.703803 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerStarted","Data":"c5d934d690cc3d8d6058bf2bd18fd4462e00d493c35cb464544048e03a5b2f62"} Mar 12 17:19:35 crc kubenswrapper[4687]: I0312 17:19:35.756498 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68df22d-ffc8-4f99-a009-1133f37d9a67" path="/var/lib/kubelet/pods/e68df22d-ffc8-4f99-a009-1133f37d9a67/volumes" Mar 12 17:19:36 crc kubenswrapper[4687]: I0312 17:19:36.719572 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerStarted","Data":"a062a9b1bcb50af1be4034c161b6d2660ad366b9e8932a8c9f0029f9e85707dd"} Mar 12 17:19:37 crc kubenswrapper[4687]: I0312 17:19:37.021893 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 17:19:37 crc kubenswrapper[4687]: I0312 17:19:37.754150 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerStarted","Data":"7ec748c70c33e32b0e4a25f4dc5274bf1e3409304ffb22f9025e85185b13057d"} Mar 12 17:19:37 crc kubenswrapper[4687]: I0312 17:19:37.754475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerStarted","Data":"19dc17f0d806a377e7dfc42eab2af486ef59dde79564e1f0eb8e88666cf60d88"} Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 17:19:39.242313 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 17:19:39.759237 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerStarted","Data":"ea41b31514deb9be29b403ce59a185193818bc6f36c2f6d64a8ff42713ef505b"} Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 17:19:39.759638 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-central-agent" containerID="cri-o://a062a9b1bcb50af1be4034c161b6d2660ad366b9e8932a8c9f0029f9e85707dd" gracePeriod=30 Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 17:19:39.759670 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="proxy-httpd" containerID="cri-o://ea41b31514deb9be29b403ce59a185193818bc6f36c2f6d64a8ff42713ef505b" gracePeriod=30 Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 17:19:39.759714 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="sg-core" containerID="cri-o://19dc17f0d806a377e7dfc42eab2af486ef59dde79564e1f0eb8e88666cf60d88" gracePeriod=30 Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 
17:19:39.759740 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-notification-agent" containerID="cri-o://7ec748c70c33e32b0e4a25f4dc5274bf1e3409304ffb22f9025e85185b13057d" gracePeriod=30 Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 17:19:39.760007 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 17:19:39 crc kubenswrapper[4687]: I0312 17:19:39.795665 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7895745650000001 podStartE2EDuration="5.795640681s" podCreationTimestamp="2026-03-12 17:19:34 +0000 UTC" firstStartedPulling="2026-03-12 17:19:35.252486951 +0000 UTC m=+4624.216449285" lastFinishedPulling="2026-03-12 17:19:39.258553057 +0000 UTC m=+4628.222515401" observedRunningTime="2026-03-12 17:19:39.788791314 +0000 UTC m=+4628.752753658" watchObservedRunningTime="2026-03-12 17:19:39.795640681 +0000 UTC m=+4628.759603025" Mar 12 17:19:40 crc kubenswrapper[4687]: I0312 17:19:40.259212 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-crmcv" Mar 12 17:19:40 crc kubenswrapper[4687]: I0312 17:19:40.794517 4687 generic.go:334] "Generic (PLEG): container finished" podID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerID="19dc17f0d806a377e7dfc42eab2af486ef59dde79564e1f0eb8e88666cf60d88" exitCode=2 Mar 12 17:19:40 crc kubenswrapper[4687]: I0312 17:19:40.794577 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerDied","Data":"19dc17f0d806a377e7dfc42eab2af486ef59dde79564e1f0eb8e88666cf60d88"} Mar 12 17:19:42 crc kubenswrapper[4687]: I0312 17:19:42.241081 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:42 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:42 crc kubenswrapper[4687]: > Mar 12 17:19:42 crc kubenswrapper[4687]: I0312 17:19:42.241650 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:19:42 crc kubenswrapper[4687]: I0312 17:19:42.244253 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"d657b183fae01432ff86dd2f57d88fb3f40d952a7f853039c531bbe8a5d0cc6b"} pod="openshift-marketplace/certified-operators-w6wwd" containerMessage="Container registry-server failed startup probe, will be restarted" Mar 12 17:19:42 crc kubenswrapper[4687]: I0312 17:19:42.244301 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" containerID="cri-o://d657b183fae01432ff86dd2f57d88fb3f40d952a7f853039c531bbe8a5d0cc6b" gracePeriod=30 Mar 12 17:19:42 crc kubenswrapper[4687]: I0312 17:19:42.358589 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:42 crc kubenswrapper[4687]: timeout: 
failed to connect service ":50051" within 1s Mar 12 17:19:42 crc kubenswrapper[4687]: > Mar 12 17:19:44 crc kubenswrapper[4687]: I0312 17:19:44.733617 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:19:45 crc kubenswrapper[4687]: I0312 17:19:45.888945 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"40ba0773921c7a67c712c495adb50129bda16d44f774878ddc552da57f9ef650"} Mar 12 17:19:46 crc kubenswrapper[4687]: I0312 17:19:46.904832 4687 generic.go:334] "Generic (PLEG): container finished" podID="08357120-f2bb-4324-ade8-a019ed106514" containerID="d657b183fae01432ff86dd2f57d88fb3f40d952a7f853039c531bbe8a5d0cc6b" exitCode=0 Mar 12 17:19:46 crc kubenswrapper[4687]: I0312 17:19:46.904912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerDied","Data":"d657b183fae01432ff86dd2f57d88fb3f40d952a7f853039c531bbe8a5d0cc6b"} Mar 12 17:19:47 crc kubenswrapper[4687]: I0312 17:19:47.968943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerStarted","Data":"af86c5dbe28e2e332451abeae36d7da75eae55e37a2383a5b2f0c8283655c1fc"} Mar 12 17:19:51 crc kubenswrapper[4687]: I0312 17:19:51.180388 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:19:51 crc kubenswrapper[4687]: I0312 17:19:51.180906 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:19:53 crc kubenswrapper[4687]: I0312 17:19:53.173539 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:53 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:53 crc kubenswrapper[4687]: > Mar 12 17:19:53 crc kubenswrapper[4687]: I0312 17:19:53.189040 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:19:53 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:19:53 crc kubenswrapper[4687]: > Mar 12 17:19:58 crc kubenswrapper[4687]: I0312 17:19:58.454683 4687 scope.go:117] "RemoveContainer" containerID="86fd0a76f1850c76f237b92d48d7db7404db1156431e7c7435df67ce9e783a05" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.188819 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555600-c69vv"] Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.199538 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555600-c69vv" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.204025 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.204229 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.204341 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.227063 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555600-c69vv"] Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.285201 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkd84\" (UniqueName: \"kubernetes.io/projected/50d22889-e03c-4ae1-affd-9829014a347e-kube-api-access-pkd84\") pod \"auto-csr-approver-29555600-c69vv\" (UID: \"50d22889-e03c-4ae1-affd-9829014a347e\") " pod="openshift-infra/auto-csr-approver-29555600-c69vv" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.387910 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkd84\" (UniqueName: \"kubernetes.io/projected/50d22889-e03c-4ae1-affd-9829014a347e-kube-api-access-pkd84\") pod \"auto-csr-approver-29555600-c69vv\" (UID: \"50d22889-e03c-4ae1-affd-9829014a347e\") " pod="openshift-infra/auto-csr-approver-29555600-c69vv" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.412501 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkd84\" (UniqueName: \"kubernetes.io/projected/50d22889-e03c-4ae1-affd-9829014a347e-kube-api-access-pkd84\") pod \"auto-csr-approver-29555600-c69vv\" (UID: \"50d22889-e03c-4ae1-affd-9829014a347e\") " pod="openshift-infra/auto-csr-approver-29555600-c69vv" Mar 12 17:20:00 crc kubenswrapper[4687]: I0312 17:20:00.526899 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555600-c69vv" Mar 12 17:20:01 crc kubenswrapper[4687]: I0312 17:20:01.451049 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555600-c69vv"] Mar 12 17:20:01 crc kubenswrapper[4687]: W0312 17:20:01.460863 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50d22889_e03c_4ae1_affd_9829014a347e.slice/crio-1e16441a5192e25de94cdd679400e887a48dcbd001f6514b722819de739ebc29 WatchSource:0}: Error finding container 1e16441a5192e25de94cdd679400e887a48dcbd001f6514b722819de739ebc29: Status 404 returned error can't find the container with id 1e16441a5192e25de94cdd679400e887a48dcbd001f6514b722819de739ebc29 Mar 12 17:20:02 crc kubenswrapper[4687]: I0312 17:20:02.170083 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555600-c69vv" event={"ID":"50d22889-e03c-4ae1-affd-9829014a347e","Type":"ContainerStarted","Data":"1e16441a5192e25de94cdd679400e887a48dcbd001f6514b722819de739ebc29"} Mar 12 17:20:02 crc kubenswrapper[4687]: I0312 17:20:02.263120 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" probeResult="failure" output=< Mar 12 17:20:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:20:02 crc kubenswrapper[4687]: > Mar 12 17:20:02 crc kubenswrapper[4687]: I0312 17:20:02.360753 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:20:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:20:02 crc kubenswrapper[4687]: > Mar 12 17:20:04 crc kubenswrapper[4687]: I0312 17:20:04.544061 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 17:20:05 crc kubenswrapper[4687]: I0312 17:20:05.207757 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555600-c69vv" event={"ID":"50d22889-e03c-4ae1-affd-9829014a347e","Type":"ContainerStarted","Data":"b02695143f83d70687c03d39710e035bb660098fe8fcfc0782035bee45e8832f"} Mar 12 17:20:05 crc kubenswrapper[4687]: I0312 17:20:05.230633 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555600-c69vv" podStartSLOduration=3.802137132 podStartE2EDuration="5.230613003s" podCreationTimestamp="2026-03-12 17:20:00 +0000 UTC" firstStartedPulling="2026-03-12 17:20:01.462427681 +0000 UTC m=+4650.426390015" lastFinishedPulling="2026-03-12 17:20:02.890903542 +0000 UTC m=+4651.854865886" observedRunningTime="2026-03-12 17:20:05.226515831 +0000 UTC m=+4654.190478185" watchObservedRunningTime="2026-03-12 17:20:05.230613003 +0000 UTC m=+4654.194575347" Mar 12 17:20:07 crc kubenswrapper[4687]: I0312 17:20:07.228466 4687 generic.go:334] "Generic (PLEG): container finished" podID="50d22889-e03c-4ae1-affd-9829014a347e" containerID="b02695143f83d70687c03d39710e035bb660098fe8fcfc0782035bee45e8832f" exitCode=0 Mar 12 17:20:07 crc kubenswrapper[4687]: I0312 17:20:07.228566 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555600-c69vv" event={"ID":"50d22889-e03c-4ae1-affd-9829014a347e","Type":"ContainerDied","Data":"b02695143f83d70687c03d39710e035bb660098fe8fcfc0782035bee45e8832f"} Mar 12 17:20:08 crc kubenswrapper[4687]: I0312 17:20:08.977318 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555600-c69vv" Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.021720 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkd84\" (UniqueName: \"kubernetes.io/projected/50d22889-e03c-4ae1-affd-9829014a347e-kube-api-access-pkd84\") pod \"50d22889-e03c-4ae1-affd-9829014a347e\" (UID: \"50d22889-e03c-4ae1-affd-9829014a347e\") " Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.029896 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d22889-e03c-4ae1-affd-9829014a347e-kube-api-access-pkd84" (OuterVolumeSpecName: "kube-api-access-pkd84") pod "50d22889-e03c-4ae1-affd-9829014a347e" (UID: "50d22889-e03c-4ae1-affd-9829014a347e"). InnerVolumeSpecName "kube-api-access-pkd84". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.125258 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkd84\" (UniqueName: \"kubernetes.io/projected/50d22889-e03c-4ae1-affd-9829014a347e-kube-api-access-pkd84\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.255732 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555600-c69vv" event={"ID":"50d22889-e03c-4ae1-affd-9829014a347e","Type":"ContainerDied","Data":"1e16441a5192e25de94cdd679400e887a48dcbd001f6514b722819de739ebc29"} Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.255789 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555600-c69vv" Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.256401 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e16441a5192e25de94cdd679400e887a48dcbd001f6514b722819de739ebc29" Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.337052 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555594-7pkcf"] Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.352044 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555594-7pkcf"] Mar 12 17:20:09 crc kubenswrapper[4687]: I0312 17:20:09.748915 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f" path="/var/lib/kubelet/pods/568b8ff3-fa3f-4e56-9e64-f6b3c2e4841f/volumes" Mar 12 17:20:10 crc kubenswrapper[4687]: I0312 17:20:10.269906 4687 generic.go:334] "Generic (PLEG): container finished" podID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerID="ea41b31514deb9be29b403ce59a185193818bc6f36c2f6d64a8ff42713ef505b" exitCode=137 Mar 12 17:20:10 crc kubenswrapper[4687]: I0312 17:20:10.270990 4687 generic.go:334] "Generic (PLEG): container finished" podID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerID="7ec748c70c33e32b0e4a25f4dc5274bf1e3409304ffb22f9025e85185b13057d" exitCode=137 Mar 12 17:20:10 crc kubenswrapper[4687]: I0312 17:20:10.271079 4687 generic.go:334] "Generic (PLEG): container finished" podID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerID="a062a9b1bcb50af1be4034c161b6d2660ad366b9e8932a8c9f0029f9e85707dd" exitCode=137 Mar 12 17:20:10 crc kubenswrapper[4687]: I0312 17:20:10.271180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerDied","Data":"ea41b31514deb9be29b403ce59a185193818bc6f36c2f6d64a8ff42713ef505b"} Mar 12 17:20:10 crc kubenswrapper[4687]: I0312 17:20:10.271278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerDied","Data":"7ec748c70c33e32b0e4a25f4dc5274bf1e3409304ffb22f9025e85185b13057d"} Mar 12 17:20:10 crc kubenswrapper[4687]: I0312 17:20:10.271454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerDied","Data":"a062a9b1bcb50af1be4034c161b6d2660ad366b9e8932a8c9f0029f9e85707dd"} Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.002353 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-ceilometer-tls-certs\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076136 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-sg-core-conf-yaml\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076226 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpzqb\" (UniqueName: \"kubernetes.io/projected/f857165a-722b-4b35-92a8-e8e7b708eeb4-kube-api-access-lpzqb\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-scripts\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076279 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-combined-ca-bundle\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076468 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-run-httpd\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-config-data\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.076578 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-log-httpd\") pod \"f857165a-722b-4b35-92a8-e8e7b708eeb4\" (UID: \"f857165a-722b-4b35-92a8-e8e7b708eeb4\") " Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.078672 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.078947 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.086792 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f857165a-722b-4b35-92a8-e8e7b708eeb4-kube-api-access-lpzqb" (OuterVolumeSpecName: "kube-api-access-lpzqb") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "kube-api-access-lpzqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.106530 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-scripts" (OuterVolumeSpecName: "scripts") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.168169 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.179026 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.179057 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpzqb\" (UniqueName: \"kubernetes.io/projected/f857165a-722b-4b35-92a8-e8e7b708eeb4-kube-api-access-lpzqb\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.179068 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.179083 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.179094 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f857165a-722b-4b35-92a8-e8e7b708eeb4-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.266461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.269512 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.273890 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.284982 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.285008 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.303614 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-config-data" (OuterVolumeSpecName: "config-data") pod "f857165a-722b-4b35-92a8-e8e7b708eeb4" (UID: "f857165a-722b-4b35-92a8-e8e7b708eeb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.340321 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f857165a-722b-4b35-92a8-e8e7b708eeb4","Type":"ContainerDied","Data":"c5d934d690cc3d8d6058bf2bd18fd4462e00d493c35cb464544048e03a5b2f62"} Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.340632 4687 scope.go:117] "RemoveContainer" containerID="ea41b31514deb9be29b403ce59a185193818bc6f36c2f6d64a8ff42713ef505b" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.340406 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.367283 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.387636 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f857165a-722b-4b35-92a8-e8e7b708eeb4-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.414859 4687 scope.go:117] "RemoveContainer" containerID="19dc17f0d806a377e7dfc42eab2af486ef59dde79564e1f0eb8e88666cf60d88" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.446399 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.459027 4687 scope.go:117] "RemoveContainer" containerID="7ec748c70c33e32b0e4a25f4dc5274bf1e3409304ffb22f9025e85185b13057d" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.466489 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.497075 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:20:11 crc kubenswrapper[4687]: E0312 17:20:11.497601 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d22889-e03c-4ae1-affd-9829014a347e" containerName="oc" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.497618 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d22889-e03c-4ae1-affd-9829014a347e" containerName="oc" Mar 12 17:20:11 crc kubenswrapper[4687]: E0312 17:20:11.497644 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-notification-agent" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.497650 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-notification-agent" Mar 12 17:20:11 crc kubenswrapper[4687]: E0312 17:20:11.497699 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="sg-core" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.497705 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="sg-core" Mar 12 17:20:11 crc kubenswrapper[4687]: E0312 17:20:11.497740 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-central-agent" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.497746 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-central-agent" Mar 12 17:20:11 crc kubenswrapper[4687]: E0312 17:20:11.497767 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="proxy-httpd" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.497773 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="proxy-httpd" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.498017 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="sg-core" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.498029 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-notification-agent" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.498040 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d22889-e03c-4ae1-affd-9829014a347e" containerName="oc" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.498053 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="proxy-httpd" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.498070 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" containerName="ceilometer-central-agent" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.499192 4687 scope.go:117] "RemoveContainer" containerID="a062a9b1bcb50af1be4034c161b6d2660ad366b9e8932a8c9f0029f9e85707dd" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.500311 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.506567 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.506812 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.506924 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.518875 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.531693 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6wwd"] Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596398 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596464 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7m8\" (UniqueName: \"kubernetes.io/projected/135f098d-bdcf-4edd-86b2-b1281313f68c-kube-api-access-lk7m8\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596563 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596801 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135f098d-bdcf-4edd-86b2-b1281313f68c-run-httpd\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596880 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-config-data\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135f098d-bdcf-4edd-86b2-b1281313f68c-log-httpd\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.596978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-scripts\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135f098d-bdcf-4edd-86b2-b1281313f68c-run-httpd\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699312 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-config-data\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699338 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135f098d-bdcf-4edd-86b2-b1281313f68c-log-httpd\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-scripts\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699509 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7m8\" (UniqueName: \"kubernetes.io/projected/135f098d-bdcf-4edd-86b2-b1281313f68c-kube-api-access-lk7m8\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699558 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.699592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.700053 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135f098d-bdcf-4edd-86b2-b1281313f68c-run-httpd\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.700868 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/135f098d-bdcf-4edd-86b2-b1281313f68c-log-httpd\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.704878 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.705497 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-scripts\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.705710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.709303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-config-data\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.713136 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/135f098d-bdcf-4edd-86b2-b1281313f68c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.721154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7m8\" (UniqueName: \"kubernetes.io/projected/135f098d-bdcf-4edd-86b2-b1281313f68c-kube-api-access-lk7m8\") pod \"ceilometer-0\" (UID: \"135f098d-bdcf-4edd-86b2-b1281313f68c\") " pod="openstack/ceilometer-0" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.746562 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f857165a-722b-4b35-92a8-e8e7b708eeb4" path="/var/lib/kubelet/pods/f857165a-722b-4b35-92a8-e8e7b708eeb4/volumes" Mar 12 17:20:11 crc kubenswrapper[4687]: I0312 17:20:11.850001 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 17:20:12 crc kubenswrapper[4687]: I0312 17:20:12.367710 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:20:12 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:20:12 crc kubenswrapper[4687]: > Mar 12 17:20:12 crc kubenswrapper[4687]: W0312 17:20:12.452687 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod135f098d_bdcf_4edd_86b2_b1281313f68c.slice/crio-ae02a882556c7fdd6844d9b91da670d238cbe760242824b78639da03b56eb726 WatchSource:0}: Error finding container ae02a882556c7fdd6844d9b91da670d238cbe760242824b78639da03b56eb726: Status 404 returned error can't find the container with id ae02a882556c7fdd6844d9b91da670d238cbe760242824b78639da03b56eb726 Mar 12 17:20:12 crc kubenswrapper[4687]: I0312 17:20:12.456124 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 17:20:13 crc kubenswrapper[4687]: I0312 17:20:13.391717 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6wwd" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" containerID="cri-o://af86c5dbe28e2e332451abeae36d7da75eae55e37a2383a5b2f0c8283655c1fc" gracePeriod=2 Mar 12 17:20:13 crc kubenswrapper[4687]: I0312 17:20:13.391870 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135f098d-bdcf-4edd-86b2-b1281313f68c","Type":"ContainerStarted","Data":"5f07eff2c59bbe5b75eb6ac1dc2bb6d1867eabee1166780c73b82dbf5e2241df"} Mar 12 17:20:13 crc kubenswrapper[4687]: I0312 17:20:13.393212 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135f098d-bdcf-4edd-86b2-b1281313f68c","Type":"ContainerStarted","Data":"ae02a882556c7fdd6844d9b91da670d238cbe760242824b78639da03b56eb726"} Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.436913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135f098d-bdcf-4edd-86b2-b1281313f68c","Type":"ContainerStarted","Data":"330dbc2bf849c19c00b390bd17d8985988a6a02df15c5d65ec7a2c0c4ba18766"} Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.442164 4687 generic.go:334] "Generic (PLEG): container finished" podID="08357120-f2bb-4324-ade8-a019ed106514" containerID="af86c5dbe28e2e332451abeae36d7da75eae55e37a2383a5b2f0c8283655c1fc" exitCode=0 Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.442207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerDied","Data":"af86c5dbe28e2e332451abeae36d7da75eae55e37a2383a5b2f0c8283655c1fc"} Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.442239 4687 scope.go:117] "RemoveContainer" containerID="d657b183fae01432ff86dd2f57d88fb3f40d952a7f853039c531bbe8a5d0cc6b" Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.687064 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.799910 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5k87\" (UniqueName: \"kubernetes.io/projected/08357120-f2bb-4324-ade8-a019ed106514-kube-api-access-m5k87\") pod \"08357120-f2bb-4324-ade8-a019ed106514\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.800144 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-catalog-content\") pod \"08357120-f2bb-4324-ade8-a019ed106514\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.800231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-utilities\") pod \"08357120-f2bb-4324-ade8-a019ed106514\" (UID: \"08357120-f2bb-4324-ade8-a019ed106514\") " Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.800639 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-utilities" (OuterVolumeSpecName: "utilities") pod "08357120-f2bb-4324-ade8-a019ed106514" (UID: "08357120-f2bb-4324-ade8-a019ed106514"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.801208 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.827545 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08357120-f2bb-4324-ade8-a019ed106514-kube-api-access-m5k87" (OuterVolumeSpecName: "kube-api-access-m5k87") pod "08357120-f2bb-4324-ade8-a019ed106514" (UID: "08357120-f2bb-4324-ade8-a019ed106514"). InnerVolumeSpecName "kube-api-access-m5k87". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.858632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08357120-f2bb-4324-ade8-a019ed106514" (UID: "08357120-f2bb-4324-ade8-a019ed106514"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.903201 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08357120-f2bb-4324-ade8-a019ed106514-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:14 crc kubenswrapper[4687]: I0312 17:20:14.903241 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5k87\" (UniqueName: \"kubernetes.io/projected/08357120-f2bb-4324-ade8-a019ed106514-kube-api-access-m5k87\") on node \"crc\" DevicePath \"\"" Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.455156 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135f098d-bdcf-4edd-86b2-b1281313f68c","Type":"ContainerStarted","Data":"6f6346957e2640e0ccbc11fd6e082a06de0066023b2be1833eb11204fa00210d"} Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.457597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6wwd" event={"ID":"08357120-f2bb-4324-ade8-a019ed106514","Type":"ContainerDied","Data":"589c2da66497ddd830247b0bf75cb14ddeada09afc806c3898d1e28f173db42c"} Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.457632 4687 scope.go:117] "RemoveContainer" containerID="af86c5dbe28e2e332451abeae36d7da75eae55e37a2383a5b2f0c8283655c1fc" Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.457718 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6wwd" Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.483296 4687 scope.go:117] "RemoveContainer" containerID="2549d2c5bb0075907d7a6e9c762e09c1e971cf34a2319dacdf8b757ec248926f" Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.515751 4687 scope.go:117] "RemoveContainer" containerID="35f2641904d8d4bae7e6ff91445ee8f048139a0277f3b6bc7e6407490a8908a3" Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.516434 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6wwd"] Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.528160 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6wwd"] Mar 12 17:20:15 crc kubenswrapper[4687]: I0312 17:20:15.750135 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08357120-f2bb-4324-ade8-a019ed106514" path="/var/lib/kubelet/pods/08357120-f2bb-4324-ade8-a019ed106514/volumes" Mar 12 17:20:18 crc kubenswrapper[4687]: I0312 17:20:18.494349 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"135f098d-bdcf-4edd-86b2-b1281313f68c","Type":"ContainerStarted","Data":"6fba656c971ea98492b13db21127004734e7b5f13ca24e6372f9aadd577f6b21"} Mar 12 17:20:18 crc kubenswrapper[4687]: I0312 17:20:18.494983 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 17:20:18 crc kubenswrapper[4687]: I0312 17:20:18.521503 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.774039771 podStartE2EDuration="7.521480359s" podCreationTimestamp="2026-03-12 17:20:11 +0000 UTC" firstStartedPulling="2026-03-12 17:20:12.455540192 +0000 UTC m=+4661.419502536" lastFinishedPulling="2026-03-12 17:20:17.20298077 +0000 UTC m=+4666.166943124" observedRunningTime="2026-03-12 17:20:18.512252927 +0000 UTC m=+4667.476215271" 
watchObservedRunningTime="2026-03-12 17:20:18.521480359 +0000 UTC m=+4667.485442703" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.032451 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-65ct5/must-gather-h4b7p"] Mar 12 17:20:20 crc kubenswrapper[4687]: E0312 17:20:20.033376 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="extract-content" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.033415 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="extract-content" Mar 12 17:20:20 crc kubenswrapper[4687]: E0312 17:20:20.033439 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.033451 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" Mar 12 17:20:20 crc kubenswrapper[4687]: E0312 17:20:20.033481 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.033491 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" Mar 12 17:20:20 crc kubenswrapper[4687]: E0312 17:20:20.033505 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="extract-utilities" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.033513 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="extract-utilities" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.033847 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.033886 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="08357120-f2bb-4324-ade8-a019ed106514" containerName="registry-server" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.041342 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.043327 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-65ct5"/"openshift-service-ca.crt" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.048884 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-65ct5"/"default-dockercfg-gt9b8" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.049187 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-65ct5"/"kube-root-ca.crt" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.131091 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd1bdda3-0007-425e-b879-ba23a52e4a1a-must-gather-output\") pod \"must-gather-h4b7p\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.131198 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r42p\" (UniqueName: \"kubernetes.io/projected/dd1bdda3-0007-425e-b879-ba23a52e4a1a-kube-api-access-8r42p\") pod \"must-gather-h4b7p\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.233132 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd1bdda3-0007-425e-b879-ba23a52e4a1a-must-gather-output\") pod \"must-gather-h4b7p\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.233265 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r42p\" (UniqueName: \"kubernetes.io/projected/dd1bdda3-0007-425e-b879-ba23a52e4a1a-kube-api-access-8r42p\") pod \"must-gather-h4b7p\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.233671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd1bdda3-0007-425e-b879-ba23a52e4a1a-must-gather-output\") pod \"must-gather-h4b7p\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.244349 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-65ct5/must-gather-h4b7p"] Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.270312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r42p\" (UniqueName: \"kubernetes.io/projected/dd1bdda3-0007-425e-b879-ba23a52e4a1a-kube-api-access-8r42p\") pod \"must-gather-h4b7p\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:20 crc kubenswrapper[4687]: I0312 17:20:20.365072 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:20:21 crc kubenswrapper[4687]: I0312 17:20:21.263328 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-65ct5/must-gather-h4b7p"] Mar 12 17:20:21 crc kubenswrapper[4687]: W0312 17:20:21.272770 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1bdda3_0007_425e_b879_ba23a52e4a1a.slice/crio-707676fff4d47e83bfdd3ef527b3467be0ad52a2a07ead21a08d11e094f78a3c WatchSource:0}: Error finding container 707676fff4d47e83bfdd3ef527b3467be0ad52a2a07ead21a08d11e094f78a3c: Status 404 returned error can't find the container with id 707676fff4d47e83bfdd3ef527b3467be0ad52a2a07ead21a08d11e094f78a3c Mar 12 17:20:21 crc kubenswrapper[4687]: I0312 17:20:21.536939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/must-gather-h4b7p" event={"ID":"dd1bdda3-0007-425e-b879-ba23a52e4a1a","Type":"ContainerStarted","Data":"707676fff4d47e83bfdd3ef527b3467be0ad52a2a07ead21a08d11e094f78a3c"} Mar 12 17:20:22 crc kubenswrapper[4687]: I0312 17:20:22.360305 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:20:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:20:22 crc kubenswrapper[4687]: > Mar 12 17:20:32 crc kubenswrapper[4687]: I0312 17:20:32.328440 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:20:32 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:20:32 crc kubenswrapper[4687]: > Mar 12 17:20:32 crc kubenswrapper[4687]: I0312 17:20:32.329174 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:20:32 crc kubenswrapper[4687]: I0312 17:20:32.330269 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"a9de538fcff48439242be7c371239a555470e952a3a01ce2563c6c69b1b496da"} pod="openshift-marketplace/redhat-operators-9g44w" containerMessage="Container registry-server failed startup probe, will be restarted" Mar 12 17:20:32 crc kubenswrapper[4687]: I0312 17:20:32.330320 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" containerID="cri-o://a9de538fcff48439242be7c371239a555470e952a3a01ce2563c6c69b1b496da" gracePeriod=30 Mar 12 17:20:33 crc kubenswrapper[4687]: I0312 17:20:33.766034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/must-gather-h4b7p" event={"ID":"dd1bdda3-0007-425e-b879-ba23a52e4a1a","Type":"ContainerStarted","Data":"fbfdaa462e0c0c4efca4510a7a405fbe3af7cb26dfd54d8bdcec3b5275c8092c"} Mar 12 17:20:34 crc kubenswrapper[4687]: I0312 17:20:34.787639 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/must-gather-h4b7p" event={"ID":"dd1bdda3-0007-425e-b879-ba23a52e4a1a","Type":"ContainerStarted","Data":"bb3ccb253e07bfdeb35c1df40908707dfe7753b92dfcdebca32514e5bd401d91"} Mar 
12 17:20:34 crc kubenswrapper[4687]: I0312 17:20:34.791665 4687 generic.go:334] "Generic (PLEG): container finished" podID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerID="a9de538fcff48439242be7c371239a555470e952a3a01ce2563c6c69b1b496da" exitCode=0 Mar 12 17:20:34 crc kubenswrapper[4687]: I0312 17:20:34.791700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerDied","Data":"a9de538fcff48439242be7c371239a555470e952a3a01ce2563c6c69b1b496da"} Mar 12 17:20:34 crc kubenswrapper[4687]: I0312 17:20:34.804968 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-65ct5/must-gather-h4b7p" podStartSLOduration=3.889273781 podStartE2EDuration="15.80494772s" podCreationTimestamp="2026-03-12 17:20:19 +0000 UTC" firstStartedPulling="2026-03-12 17:20:21.286483551 +0000 UTC m=+4670.250445895" lastFinishedPulling="2026-03-12 17:20:33.20215749 +0000 UTC m=+4682.166119834" observedRunningTime="2026-03-12 17:20:34.803429728 +0000 UTC m=+4683.767392102" watchObservedRunningTime="2026-03-12 17:20:34.80494772 +0000 UTC m=+4683.768910064" Mar 12 17:20:35 crc kubenswrapper[4687]: I0312 17:20:35.804914 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerStarted","Data":"9113e35d21ff0f7f61efa835aa84611b34881cc964c8a07f560fc65581d0fbb7"} Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.263061 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-65ct5/crc-debug-6774n"] Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.269374 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.276621 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.277440 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.324193 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xw8d\" (UniqueName: \"kubernetes.io/projected/f8821cc2-cf5a-4e29-9761-39319969362b-kube-api-access-6xw8d\") pod \"crc-debug-6774n\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.324895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8821cc2-cf5a-4e29-9761-39319969362b-host\") pod \"crc-debug-6774n\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.426832 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xw8d\" (UniqueName: \"kubernetes.io/projected/f8821cc2-cf5a-4e29-9761-39319969362b-kube-api-access-6xw8d\") pod \"crc-debug-6774n\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.426946 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8821cc2-cf5a-4e29-9761-39319969362b-host\") pod \"crc-debug-6774n\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.427946 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8821cc2-cf5a-4e29-9761-39319969362b-host\") pod \"crc-debug-6774n\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.463347 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xw8d\" (UniqueName: \"kubernetes.io/projected/f8821cc2-cf5a-4e29-9761-39319969362b-kube-api-access-6xw8d\") pod \"crc-debug-6774n\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.618681 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:20:41 crc kubenswrapper[4687]: W0312 17:20:41.671185 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8821cc2_cf5a_4e29_9761_39319969362b.slice/crio-896deb08d219db8491728ca598be18a94eadf7413c347312029b76d8abccd342 WatchSource:0}: Error finding container 896deb08d219db8491728ca598be18a94eadf7413c347312029b76d8abccd342: Status 404 returned error can't find the container with id 896deb08d219db8491728ca598be18a94eadf7413c347312029b76d8abccd342 Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.887180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/crc-debug-6774n" event={"ID":"f8821cc2-cf5a-4e29-9761-39319969362b","Type":"ContainerStarted","Data":"896deb08d219db8491728ca598be18a94eadf7413c347312029b76d8abccd342"} Mar 12 17:20:41 crc kubenswrapper[4687]: I0312 17:20:41.920580 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 17:20:42 crc kubenswrapper[4687]: I0312 17:20:42.343090 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:20:42 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:20:42 crc kubenswrapper[4687]: > Mar 12 17:20:52 crc kubenswrapper[4687]: I0312 17:20:52.332459 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:20:52 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:20:52 crc kubenswrapper[4687]: > Mar 12 17:20:56 crc kubenswrapper[4687]: I0312 17:20:56.123737 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/crc-debug-6774n" event={"ID":"f8821cc2-cf5a-4e29-9761-39319969362b","Type":"ContainerStarted","Data":"c45c339ec6a42fdf1a8b2396946c4dc158ebd86656d16c0f6fecf2c93f578a9f"} Mar 12 17:20:56 crc kubenswrapper[4687]: I0312 17:20:56.149470 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-65ct5/crc-debug-6774n" podStartSLOduration=1.690857486 podStartE2EDuration="15.149445401s" podCreationTimestamp="2026-03-12 17:20:41 +0000 UTC" firstStartedPulling="2026-03-12 17:20:41.673570651 +0000 UTC m=+4690.637532995" lastFinishedPulling="2026-03-12 17:20:55.132158566 +0000 UTC m=+4704.096120910" observedRunningTime="2026-03-12 17:20:56.13804842 +0000 UTC m=+4705.102010764" watchObservedRunningTime="2026-03-12 17:20:56.149445401 +0000 UTC m=+4705.113407745" Mar 12 17:20:58 crc kubenswrapper[4687]: I0312 17:20:58.726149 4687 scope.go:117] "RemoveContainer" containerID="d83f5a86ad6f7428727a435dd81e20945b76eabad8961f2a983b09ee38248cb9" Mar 12 17:21:02 crc kubenswrapper[4687]: I0312 17:21:02.342036 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:21:02 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:21:02 crc kubenswrapper[4687]: > Mar 12 17:21:05 crc kubenswrapper[4687]: I0312 17:21:05.226890 4687 
generic.go:334] "Generic (PLEG): container finished" podID="66bdc25f-19d1-4b63-83e6-ad246f6722e8" containerID="90fb3737b76133d7293c829b2e35edef4ace11e5b9f9e88e9723dc1cd643ef71" exitCode=0 Mar 12 17:21:05 crc kubenswrapper[4687]: I0312 17:21:05.227424 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" event={"ID":"66bdc25f-19d1-4b63-83e6-ad246f6722e8","Type":"ContainerDied","Data":"90fb3737b76133d7293c829b2e35edef4ace11e5b9f9e88e9723dc1cd643ef71"} Mar 12 17:21:05 crc kubenswrapper[4687]: I0312 17:21:05.227452 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" event={"ID":"66bdc25f-19d1-4b63-83e6-ad246f6722e8","Type":"ContainerStarted","Data":"3c986fdb78867cca0ff52cdc6b8816d99119758d377384075b1c2c7dd7aa779d"} Mar 12 17:21:12 crc kubenswrapper[4687]: I0312 17:21:12.348070 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:21:12 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:21:12 crc kubenswrapper[4687]: > Mar 12 17:21:13 crc kubenswrapper[4687]: I0312 17:21:13.147066 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 17:21:13 crc kubenswrapper[4687]: I0312 17:21:13.147447 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 17:21:22 crc kubenswrapper[4687]: I0312 17:21:22.345297 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" probeResult="failure" output=< Mar 12 17:21:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:21:22 crc kubenswrapper[4687]: > Mar 12 17:21:31 crc kubenswrapper[4687]: I0312 17:21:31.361524 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:21:31 crc kubenswrapper[4687]: I0312 17:21:31.421620 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:21:31 crc kubenswrapper[4687]: I0312 17:21:31.601303 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9g44w"] Mar 12 17:21:32 crc kubenswrapper[4687]: I0312 17:21:32.532862 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9g44w" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" containerID="cri-o://9113e35d21ff0f7f61efa835aa84611b34881cc964c8a07f560fc65581d0fbb7" gracePeriod=2 Mar 12 17:21:33 crc kubenswrapper[4687]: I0312 17:21:33.181676 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 17:21:33 crc kubenswrapper[4687]: I0312 17:21:33.187176 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5955fd9895-8btf6" Mar 12 17:21:33 crc kubenswrapper[4687]: I0312 17:21:33.546642 4687 generic.go:334] "Generic (PLEG): container finished" podID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" 
containerID="9113e35d21ff0f7f61efa835aa84611b34881cc964c8a07f560fc65581d0fbb7" exitCode=0 Mar 12 17:21:33 crc kubenswrapper[4687]: I0312 17:21:33.547656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerDied","Data":"9113e35d21ff0f7f61efa835aa84611b34881cc964c8a07f560fc65581d0fbb7"} Mar 12 17:21:33 crc kubenswrapper[4687]: I0312 17:21:33.547689 4687 scope.go:117] "RemoveContainer" containerID="a9de538fcff48439242be7c371239a555470e952a3a01ce2563c6c69b1b496da" Mar 12 17:21:33 crc kubenswrapper[4687]: I0312 17:21:33.891706 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.009145 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-catalog-content\") pod \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.009393 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-utilities\") pod \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.009447 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7st5\" (UniqueName: \"kubernetes.io/projected/171a712a-f2d7-4b3b-8d7a-c413b045e54c-kube-api-access-x7st5\") pod \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\" (UID: \"171a712a-f2d7-4b3b-8d7a-c413b045e54c\") " Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.017113 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/171a712a-f2d7-4b3b-8d7a-c413b045e54c-kube-api-access-x7st5" (OuterVolumeSpecName: "kube-api-access-x7st5") pod "171a712a-f2d7-4b3b-8d7a-c413b045e54c" (UID: "171a712a-f2d7-4b3b-8d7a-c413b045e54c"). InnerVolumeSpecName "kube-api-access-x7st5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.019486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-utilities" (OuterVolumeSpecName: "utilities") pod "171a712a-f2d7-4b3b-8d7a-c413b045e54c" (UID: "171a712a-f2d7-4b3b-8d7a-c413b045e54c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.112253 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.112555 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7st5\" (UniqueName: \"kubernetes.io/projected/171a712a-f2d7-4b3b-8d7a-c413b045e54c-kube-api-access-x7st5\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.159941 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "171a712a-f2d7-4b3b-8d7a-c413b045e54c" (UID: "171a712a-f2d7-4b3b-8d7a-c413b045e54c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.215026 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/171a712a-f2d7-4b3b-8d7a-c413b045e54c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.571605 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g44w" event={"ID":"171a712a-f2d7-4b3b-8d7a-c413b045e54c","Type":"ContainerDied","Data":"e950631412c712d215aefadf7b17151271474848d56ce3df382352b0ab23fc30"} Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.571654 4687 scope.go:117] "RemoveContainer" containerID="9113e35d21ff0f7f61efa835aa84611b34881cc964c8a07f560fc65581d0fbb7" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.571727 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9g44w" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.598153 4687 scope.go:117] "RemoveContainer" containerID="9f338c064484e7fe00f8ae2266ee1d3ba072289aae7c6a57467f1cbe9882afca" Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.611752 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9g44w"] Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.626021 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9g44w"] Mar 12 17:21:34 crc kubenswrapper[4687]: I0312 17:21:34.648763 4687 scope.go:117] "RemoveContainer" containerID="f9dbbdf30bfd22423640d24dd9dedf5b0c9b20eb4d228f2dc22120ef9cb013be" Mar 12 17:21:35 crc kubenswrapper[4687]: I0312 17:21:35.753033 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" path="/var/lib/kubelet/pods/171a712a-f2d7-4b3b-8d7a-c413b045e54c/volumes" Mar 12 17:21:41 crc kubenswrapper[4687]: I0312 17:21:41.659471 4687 generic.go:334] "Generic (PLEG): container finished" podID="f8821cc2-cf5a-4e29-9761-39319969362b" containerID="c45c339ec6a42fdf1a8b2396946c4dc158ebd86656d16c0f6fecf2c93f578a9f" exitCode=0 Mar 12 17:21:41 crc kubenswrapper[4687]: I0312 17:21:41.659541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/crc-debug-6774n" event={"ID":"f8821cc2-cf5a-4e29-9761-39319969362b","Type":"ContainerDied","Data":"c45c339ec6a42fdf1a8b2396946c4dc158ebd86656d16c0f6fecf2c93f578a9f"} Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.806679 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.842275 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-65ct5/crc-debug-6774n"] Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.853560 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-65ct5/crc-debug-6774n"] Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.862254 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xw8d\" (UniqueName: \"kubernetes.io/projected/f8821cc2-cf5a-4e29-9761-39319969362b-kube-api-access-6xw8d\") pod \"f8821cc2-cf5a-4e29-9761-39319969362b\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.862327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8821cc2-cf5a-4e29-9761-39319969362b-host\") pod \"f8821cc2-cf5a-4e29-9761-39319969362b\" (UID: \"f8821cc2-cf5a-4e29-9761-39319969362b\") " Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.862899 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8821cc2-cf5a-4e29-9761-39319969362b-host" (OuterVolumeSpecName: "host") pod "f8821cc2-cf5a-4e29-9761-39319969362b" (UID: "f8821cc2-cf5a-4e29-9761-39319969362b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.863455 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8821cc2-cf5a-4e29-9761-39319969362b-host\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.868868 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8821cc2-cf5a-4e29-9761-39319969362b-kube-api-access-6xw8d" (OuterVolumeSpecName: "kube-api-access-6xw8d") pod "f8821cc2-cf5a-4e29-9761-39319969362b" (UID: "f8821cc2-cf5a-4e29-9761-39319969362b"). InnerVolumeSpecName "kube-api-access-6xw8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:21:42 crc kubenswrapper[4687]: I0312 17:21:42.966193 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xw8d\" (UniqueName: \"kubernetes.io/projected/f8821cc2-cf5a-4e29-9761-39319969362b-kube-api-access-6xw8d\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:43 crc kubenswrapper[4687]: I0312 17:21:43.685889 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="896deb08d219db8491728ca598be18a94eadf7413c347312029b76d8abccd342" Mar 12 17:21:43 crc kubenswrapper[4687]: I0312 17:21:43.685930 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-6774n" Mar 12 17:21:43 crc kubenswrapper[4687]: I0312 17:21:43.772630 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8821cc2-cf5a-4e29-9761-39319969362b" path="/var/lib/kubelet/pods/f8821cc2-cf5a-4e29-9761-39319969362b/volumes" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.047039 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-65ct5/crc-debug-58jnc"] Mar 12 17:21:44 crc kubenswrapper[4687]: E0312 17:21:44.047651 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="extract-utilities" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.047663 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="extract-utilities" Mar 12 17:21:44 crc kubenswrapper[4687]: E0312 17:21:44.047679 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.047686 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" Mar 12 17:21:44 crc kubenswrapper[4687]: E0312 17:21:44.047725 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="extract-content" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.047731 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="extract-content" Mar 12 17:21:44 crc kubenswrapper[4687]: E0312 17:21:44.047746 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.047751 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" Mar 12 17:21:44 crc kubenswrapper[4687]: E0312 17:21:44.047765 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8821cc2-cf5a-4e29-9761-39319969362b" containerName="container-00" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.047770 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8821cc2-cf5a-4e29-9761-39319969362b" containerName="container-00" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.048031 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.048046 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="171a712a-f2d7-4b3b-8d7a-c413b045e54c" containerName="registry-server" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.048066 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8821cc2-cf5a-4e29-9761-39319969362b" containerName="container-00" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.048850 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.203449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xttw\" (UniqueName: \"kubernetes.io/projected/cf54d06b-686d-448b-9fa4-a764fe39e1f0-kube-api-access-7xttw\") pod \"crc-debug-58jnc\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.203594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf54d06b-686d-448b-9fa4-a764fe39e1f0-host\") pod \"crc-debug-58jnc\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.306441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xttw\" (UniqueName: \"kubernetes.io/projected/cf54d06b-686d-448b-9fa4-a764fe39e1f0-kube-api-access-7xttw\") pod \"crc-debug-58jnc\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.306578 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf54d06b-686d-448b-9fa4-a764fe39e1f0-host\") pod \"crc-debug-58jnc\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.306730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf54d06b-686d-448b-9fa4-a764fe39e1f0-host\") pod \"crc-debug-58jnc\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.327424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xttw\" (UniqueName: \"kubernetes.io/projected/cf54d06b-686d-448b-9fa4-a764fe39e1f0-kube-api-access-7xttw\") pod \"crc-debug-58jnc\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.369323 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:44 crc kubenswrapper[4687]: I0312 17:21:44.698843 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/crc-debug-58jnc" event={"ID":"cf54d06b-686d-448b-9fa4-a764fe39e1f0","Type":"ContainerStarted","Data":"140e705c00a9a03fbd3006275cb0daf936062c4669795a1d3b55e2d48da13be0"} Mar 12 17:21:45 crc kubenswrapper[4687]: I0312 17:21:45.711217 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf54d06b-686d-448b-9fa4-a764fe39e1f0" containerID="4cbc5915c2b3b397a740aed708fa72e82e6415cb5af963ee4e9d713f068d8a91" exitCode=0 Mar 12 17:21:45 crc kubenswrapper[4687]: I0312 17:21:45.711265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/crc-debug-58jnc" event={"ID":"cf54d06b-686d-448b-9fa4-a764fe39e1f0","Type":"ContainerDied","Data":"4cbc5915c2b3b397a740aed708fa72e82e6415cb5af963ee4e9d713f068d8a91"} Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.671173 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-65ct5/crc-debug-58jnc"] Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.683500 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-65ct5/crc-debug-58jnc"] Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.884350 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.967984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf54d06b-686d-448b-9fa4-a764fe39e1f0-host\") pod \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.968043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xttw\" (UniqueName: \"kubernetes.io/projected/cf54d06b-686d-448b-9fa4-a764fe39e1f0-kube-api-access-7xttw\") pod \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\" (UID: \"cf54d06b-686d-448b-9fa4-a764fe39e1f0\") " Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.968344 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf54d06b-686d-448b-9fa4-a764fe39e1f0-host" (OuterVolumeSpecName: "host") pod "cf54d06b-686d-448b-9fa4-a764fe39e1f0" (UID: "cf54d06b-686d-448b-9fa4-a764fe39e1f0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.968889 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf54d06b-686d-448b-9fa4-a764fe39e1f0-host\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:46 crc kubenswrapper[4687]: I0312 17:21:46.973964 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf54d06b-686d-448b-9fa4-a764fe39e1f0-kube-api-access-7xttw" (OuterVolumeSpecName: "kube-api-access-7xttw") pod "cf54d06b-686d-448b-9fa4-a764fe39e1f0" (UID: "cf54d06b-686d-448b-9fa4-a764fe39e1f0"). InnerVolumeSpecName "kube-api-access-7xttw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.071374 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xttw\" (UniqueName: \"kubernetes.io/projected/cf54d06b-686d-448b-9fa4-a764fe39e1f0-kube-api-access-7xttw\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.738754 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-58jnc" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.748442 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf54d06b-686d-448b-9fa4-a764fe39e1f0" path="/var/lib/kubelet/pods/cf54d06b-686d-448b-9fa4-a764fe39e1f0/volumes" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.749269 4687 scope.go:117] "RemoveContainer" containerID="4cbc5915c2b3b397a740aed708fa72e82e6415cb5af963ee4e9d713f068d8a91" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.866248 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-65ct5/crc-debug-vcrg2"] Mar 12 17:21:47 crc kubenswrapper[4687]: E0312 17:21:47.866959 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf54d06b-686d-448b-9fa4-a764fe39e1f0" containerName="container-00" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.867035 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf54d06b-686d-448b-9fa4-a764fe39e1f0" containerName="container-00" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.867342 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf54d06b-686d-448b-9fa4-a764fe39e1f0" containerName="container-00" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.868274 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.989928 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjjt\" (UniqueName: \"kubernetes.io/projected/8d06777a-58be-48fa-8f72-2d0238fce9d1-kube-api-access-jbjjt\") pod \"crc-debug-vcrg2\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:47 crc kubenswrapper[4687]: I0312 17:21:47.990913 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d06777a-58be-48fa-8f72-2d0238fce9d1-host\") pod \"crc-debug-vcrg2\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.093314 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjjt\" (UniqueName: \"kubernetes.io/projected/8d06777a-58be-48fa-8f72-2d0238fce9d1-kube-api-access-jbjjt\") pod \"crc-debug-vcrg2\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.094087 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d06777a-58be-48fa-8f72-2d0238fce9d1-host\") pod \"crc-debug-vcrg2\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.094233 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d06777a-58be-48fa-8f72-2d0238fce9d1-host\") pod \"crc-debug-vcrg2\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.128881 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjjt\" (UniqueName: \"kubernetes.io/projected/8d06777a-58be-48fa-8f72-2d0238fce9d1-kube-api-access-jbjjt\") pod \"crc-debug-vcrg2\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.243339 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:48 crc kubenswrapper[4687]: W0312 17:21:48.292189 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d06777a_58be_48fa_8f72_2d0238fce9d1.slice/crio-1de0939b2756921d031b12971a074d341003443e328b5b71c2d9057ab72ff8ce WatchSource:0}: Error finding container 1de0939b2756921d031b12971a074d341003443e328b5b71c2d9057ab72ff8ce: Status 404 returned error can't find the container with id 1de0939b2756921d031b12971a074d341003443e328b5b71c2d9057ab72ff8ce Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.751786 4687 generic.go:334] "Generic (PLEG): container finished" podID="8d06777a-58be-48fa-8f72-2d0238fce9d1" containerID="18698474c89cc0a1729bede692cd06282c8842c15313664808cad79dfbeb16cf" exitCode=0 Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.751874 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/crc-debug-vcrg2" event={"ID":"8d06777a-58be-48fa-8f72-2d0238fce9d1","Type":"ContainerDied","Data":"18698474c89cc0a1729bede692cd06282c8842c15313664808cad79dfbeb16cf"} Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.752292 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/crc-debug-vcrg2" event={"ID":"8d06777a-58be-48fa-8f72-2d0238fce9d1","Type":"ContainerStarted","Data":"1de0939b2756921d031b12971a074d341003443e328b5b71c2d9057ab72ff8ce"} Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.795471 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-65ct5/crc-debug-vcrg2"] Mar 12 17:21:48 crc kubenswrapper[4687]: I0312 17:21:48.805887 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-65ct5/crc-debug-vcrg2"] Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.396561 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.553734 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d06777a-58be-48fa-8f72-2d0238fce9d1-host\") pod \"8d06777a-58be-48fa-8f72-2d0238fce9d1\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.553848 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d06777a-58be-48fa-8f72-2d0238fce9d1-host" (OuterVolumeSpecName: "host") pod "8d06777a-58be-48fa-8f72-2d0238fce9d1" (UID: "8d06777a-58be-48fa-8f72-2d0238fce9d1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.554115 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjjt\" (UniqueName: \"kubernetes.io/projected/8d06777a-58be-48fa-8f72-2d0238fce9d1-kube-api-access-jbjjt\") pod \"8d06777a-58be-48fa-8f72-2d0238fce9d1\" (UID: \"8d06777a-58be-48fa-8f72-2d0238fce9d1\") " Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.555030 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d06777a-58be-48fa-8f72-2d0238fce9d1-host\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.563034 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d06777a-58be-48fa-8f72-2d0238fce9d1-kube-api-access-jbjjt" (OuterVolumeSpecName: "kube-api-access-jbjjt") pod "8d06777a-58be-48fa-8f72-2d0238fce9d1" (UID: "8d06777a-58be-48fa-8f72-2d0238fce9d1"). InnerVolumeSpecName "kube-api-access-jbjjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.657594 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjjt\" (UniqueName: \"kubernetes.io/projected/8d06777a-58be-48fa-8f72-2d0238fce9d1-kube-api-access-jbjjt\") on node \"crc\" DevicePath \"\"" Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.807631 4687 scope.go:117] "RemoveContainer" containerID="18698474c89cc0a1729bede692cd06282c8842c15313664808cad79dfbeb16cf" Mar 12 17:21:50 crc kubenswrapper[4687]: I0312 17:21:50.807684 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-65ct5/crc-debug-vcrg2" Mar 12 17:21:51 crc kubenswrapper[4687]: I0312 17:21:51.758530 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d06777a-58be-48fa-8f72-2d0238fce9d1" path="/var/lib/kubelet/pods/8d06777a-58be-48fa-8f72-2d0238fce9d1/volumes" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.210469 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555602-cfhzw"] Mar 12 17:22:00 crc kubenswrapper[4687]: E0312 17:22:00.211671 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d06777a-58be-48fa-8f72-2d0238fce9d1" containerName="container-00" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.211690 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d06777a-58be-48fa-8f72-2d0238fce9d1" containerName="container-00" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.212064 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d06777a-58be-48fa-8f72-2d0238fce9d1" containerName="container-00" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.213128 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.217345 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.218622 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.224883 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.225521 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555602-cfhzw"] Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.307419 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8l6\" (UniqueName: \"kubernetes.io/projected/bd24dd7f-8745-4439-82d6-af2eca4e4885-kube-api-access-rh8l6\") pod \"auto-csr-approver-29555602-cfhzw\" (UID: \"bd24dd7f-8745-4439-82d6-af2eca4e4885\") " pod="openshift-infra/auto-csr-approver-29555602-cfhzw" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.410180 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8l6\" (UniqueName: \"kubernetes.io/projected/bd24dd7f-8745-4439-82d6-af2eca4e4885-kube-api-access-rh8l6\") pod \"auto-csr-approver-29555602-cfhzw\" (UID: \"bd24dd7f-8745-4439-82d6-af2eca4e4885\") " pod="openshift-infra/auto-csr-approver-29555602-cfhzw" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.429023 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8l6\" (UniqueName: \"kubernetes.io/projected/bd24dd7f-8745-4439-82d6-af2eca4e4885-kube-api-access-rh8l6\") pod \"auto-csr-approver-29555602-cfhzw\" (UID: \"bd24dd7f-8745-4439-82d6-af2eca4e4885\") " pod="openshift-infra/auto-csr-approver-29555602-cfhzw" Mar 12 17:22:00 crc kubenswrapper[4687]: I0312 17:22:00.544738 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" Mar 12 17:22:01 crc kubenswrapper[4687]: I0312 17:22:01.074730 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555602-cfhzw"] Mar 12 17:22:01 crc kubenswrapper[4687]: W0312 17:22:01.085800 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd24dd7f_8745_4439_82d6_af2eca4e4885.slice/crio-6d618f10dd75bb3307b6197a967818d1224ad82bbafcee7a35833d7f99e30d3f WatchSource:0}: Error finding container 6d618f10dd75bb3307b6197a967818d1224ad82bbafcee7a35833d7f99e30d3f: Status 404 returned error can't find the container with id 6d618f10dd75bb3307b6197a967818d1224ad82bbafcee7a35833d7f99e30d3f Mar 12 17:22:01 crc kubenswrapper[4687]: I0312 17:22:01.954990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" event={"ID":"bd24dd7f-8745-4439-82d6-af2eca4e4885","Type":"ContainerStarted","Data":"6d618f10dd75bb3307b6197a967818d1224ad82bbafcee7a35833d7f99e30d3f"} Mar 12 17:22:03 crc kubenswrapper[4687]: I0312 17:22:03.982627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" event={"ID":"bd24dd7f-8745-4439-82d6-af2eca4e4885","Type":"ContainerStarted","Data":"2900e6e94760a6f17dbef2b1f260fd91883c009b219513d170459546079872cb"} Mar 12 17:22:04 crc kubenswrapper[4687]: I0312 17:22:04.005326 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" podStartSLOduration=2.969224867 podStartE2EDuration="4.005309095s" podCreationTimestamp="2026-03-12 17:22:00 +0000 UTC" firstStartedPulling="2026-03-12 17:22:01.090822542 +0000 UTC m=+4770.054784926" lastFinishedPulling="2026-03-12 17:22:02.12690679 +0000 UTC m=+4771.090869154" observedRunningTime="2026-03-12 17:22:03.999603419 +0000 UTC m=+4772.963565763" watchObservedRunningTime="2026-03-12 17:22:04.005309095 +0000 UTC m=+4772.969271439" Mar 12 17:22:05 crc kubenswrapper[4687]: I0312 17:22:05.001198 4687 generic.go:334] "Generic (PLEG): container finished" podID="bd24dd7f-8745-4439-82d6-af2eca4e4885" containerID="2900e6e94760a6f17dbef2b1f260fd91883c009b219513d170459546079872cb" exitCode=0 Mar 12 17:22:05 crc kubenswrapper[4687]: I0312 17:22:05.001322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" event={"ID":"bd24dd7f-8745-4439-82d6-af2eca4e4885","Type":"ContainerDied","Data":"2900e6e94760a6f17dbef2b1f260fd91883c009b219513d170459546079872cb"} Mar 12 17:22:07 crc kubenswrapper[4687]: I0312 17:22:07.023557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" event={"ID":"bd24dd7f-8745-4439-82d6-af2eca4e4885","Type":"ContainerDied","Data":"6d618f10dd75bb3307b6197a967818d1224ad82bbafcee7a35833d7f99e30d3f"} Mar 12 17:22:07 crc kubenswrapper[4687]: I0312 17:22:07.024057 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d618f10dd75bb3307b6197a967818d1224ad82bbafcee7a35833d7f99e30d3f" Mar 12 17:22:07 crc kubenswrapper[4687]: I0312 17:22:07.181665 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" Mar 12 17:22:07 crc kubenswrapper[4687]: I0312 17:22:07.314832 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8l6\" (UniqueName: \"kubernetes.io/projected/bd24dd7f-8745-4439-82d6-af2eca4e4885-kube-api-access-rh8l6\") pod \"bd24dd7f-8745-4439-82d6-af2eca4e4885\" (UID: \"bd24dd7f-8745-4439-82d6-af2eca4e4885\") " Mar 12 17:22:07 crc kubenswrapper[4687]: I0312 17:22:07.359557 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd24dd7f-8745-4439-82d6-af2eca4e4885-kube-api-access-rh8l6" (OuterVolumeSpecName: "kube-api-access-rh8l6") pod "bd24dd7f-8745-4439-82d6-af2eca4e4885" (UID: "bd24dd7f-8745-4439-82d6-af2eca4e4885"). InnerVolumeSpecName "kube-api-access-rh8l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:22:07 crc kubenswrapper[4687]: I0312 17:22:07.418113 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8l6\" (UniqueName: \"kubernetes.io/projected/bd24dd7f-8745-4439-82d6-af2eca4e4885-kube-api-access-rh8l6\") on node \"crc\" DevicePath \"\"" Mar 12 17:22:08 crc kubenswrapper[4687]: I0312 17:22:08.046975 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555602-cfhzw" Mar 12 17:22:08 crc kubenswrapper[4687]: I0312 17:22:08.246026 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555596-jp8g4"] Mar 12 17:22:08 crc kubenswrapper[4687]: I0312 17:22:08.258047 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555596-jp8g4"] Mar 12 17:22:09 crc kubenswrapper[4687]: I0312 17:22:09.748458 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc" path="/var/lib/kubelet/pods/dec0ae2d-a221-4e48-b4ca-d4fc88b1aacc/volumes" Mar 12 17:22:14 crc kubenswrapper[4687]: I0312 17:22:14.122609 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:22:14 crc kubenswrapper[4687]: I0312 17:22:14.123842 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:22:28 crc kubenswrapper[4687]: I0312 17:22:28.955300 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_281c52ef-94bc-4850-b95a-ca740095f39b/aodh-api/0.log" Mar 12 17:22:29 crc kubenswrapper[4687]: I0312 17:22:29.141467 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_281c52ef-94bc-4850-b95a-ca740095f39b/aodh-evaluator/0.log" Mar 12 17:22:29 crc kubenswrapper[4687]: I0312 17:22:29.173312 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_281c52ef-94bc-4850-b95a-ca740095f39b/aodh-listener/0.log" Mar 12 17:22:29 crc kubenswrapper[4687]: I0312 17:22:29.294946 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_281c52ef-94bc-4850-b95a-ca740095f39b/aodh-notifier/0.log" Mar 12 
17:22:29 crc kubenswrapper[4687]: I0312 17:22:29.386240 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6844d7659d-l5x7z_215e5a6b-7a05-4522-8f4c-7e0f82634490/barbican-api/0.log" Mar 12 17:22:29 crc kubenswrapper[4687]: I0312 17:22:29.437320 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6844d7659d-l5x7z_215e5a6b-7a05-4522-8f4c-7e0f82634490/barbican-api-log/0.log" Mar 12 17:22:29 crc kubenswrapper[4687]: I0312 17:22:29.536625 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57d99b55cd-9f9vz_037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c/barbican-keystone-listener/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.401703 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cf7bfd84c-qqlwf_1fb6c2ae-414d-4a45-81f0-4505469a7143/barbican-worker/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.424221 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57d99b55cd-9f9vz_037ea71c-3407-4bb1-8dd8-e0f8e31fcb5c/barbican-keystone-listener-log/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.435441 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cf7bfd84c-qqlwf_1fb6c2ae-414d-4a45-81f0-4505469a7143/barbican-worker-log/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.655883 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-gfzwf_e5e2e312-1292-454f-8b27-6a6a43fe4a1e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.697858 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_135f098d-bdcf-4edd-86b2-b1281313f68c/ceilometer-central-agent/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.846511 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_135f098d-bdcf-4edd-86b2-b1281313f68c/ceilometer-notification-agent/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.855395 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_135f098d-bdcf-4edd-86b2-b1281313f68c/proxy-httpd/0.log" Mar 12 17:22:30 crc kubenswrapper[4687]: I0312 17:22:30.876060 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_135f098d-bdcf-4edd-86b2-b1281313f68c/sg-core/0.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.098319 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f67753ae-a56b-4974-93e0-70122db7ebde/cinder-api/0.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.114431 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f67753ae-a56b-4974-93e0-70122db7ebde/cinder-api-log/0.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.265004 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0ba053ef-190c-4642-ac17-9876798b2390/cinder-scheduler/1.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.340102 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0ba053ef-190c-4642-ac17-9876798b2390/cinder-scheduler/0.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.413122 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_0ba053ef-190c-4642-ac17-9876798b2390/probe/0.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.496612 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-xfqms_c2a0d336-cce5-4c55-a8c7-7d018a0131ed/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.950032 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7sgkb_754c5904-0fe4-408c-bf65-439e420218f8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:31 crc kubenswrapper[4687]: I0312 17:22:31.988291 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-lsjxb_1dd00953-0208-426f-a053-88364a767791/init/0.log" Mar 12 17:22:32 crc kubenswrapper[4687]: I0312 17:22:32.630881 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-lsjxb_1dd00953-0208-426f-a053-88364a767791/init/0.log" Mar 12 17:22:32 crc kubenswrapper[4687]: I0312 17:22:32.654247 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-lsjxb_1dd00953-0208-426f-a053-88364a767791/dnsmasq-dns/0.log" Mar 12 17:22:32 crc kubenswrapper[4687]: I0312 17:22:32.681344 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-g5sm4_c24fada5-9a93-4300-85b9-19da711555dc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:32 crc kubenswrapper[4687]: I0312 17:22:32.986901 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6691ec24-3499-48c7-85f2-1f4ea3327d55/glance-log/0.log" Mar 12 17:22:33 crc kubenswrapper[4687]: I0312 17:22:33.004755 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6691ec24-3499-48c7-85f2-1f4ea3327d55/glance-httpd/0.log" Mar 12 17:22:33 crc kubenswrapper[4687]: I0312 17:22:33.187684 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_924199ba-22cc-4b3a-8f1e-8ecf613daac5/glance-log/0.log" Mar 12 17:22:33 crc kubenswrapper[4687]: I0312 17:22:33.200446 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_924199ba-22cc-4b3a-8f1e-8ecf613daac5/glance-httpd/0.log" Mar 12 17:22:33 crc kubenswrapper[4687]: I0312 17:22:33.638849 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6cd988bd5d-l6ddg_95929173-9929-400b-be3f-2fee62cbab3d/heat-api/0.log" Mar 12 17:22:33 crc kubenswrapper[4687]: I0312 17:22:33.775056 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-557tk_3338fca0-a722-4b15-8422-f36e65ad1a2b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:33 crc kubenswrapper[4687]: I0312 17:22:33.823047 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-b87795674-d968s_667072f7-1d8a-4f67-87bb-f587f6384ffd/heat-engine/0.log" Mar 12 17:22:33 crc kubenswrapper[4687]: I0312 17:22:33.868615 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-hsdp8_c4aa11c4-93fb-446b-ac1b-c62279d040dd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:33 crc 
kubenswrapper[4687]: I0312 17:22:33.923845 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-76878d96fd-ck8qd_61cc010a-4cd7-4938-b38e-4af19ead4e50/heat-cfnapi/0.log" Mar 12 17:22:34 crc kubenswrapper[4687]: I0312 17:22:34.189761 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555581-pd64h_e93eb345-861a-4da0-a57e-93775cfb061d/keystone-cron/0.log" Mar 12 17:22:34 crc kubenswrapper[4687]: I0312 17:22:34.315285 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_550e3dcc-162b-4c82-8a8f-81e03e689772/kube-state-metrics/0.log" Mar 12 17:22:34 crc kubenswrapper[4687]: I0312 17:22:34.420408 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-m85zr_092d1dd3-ae4d-4f56-81f7-105d449842f5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:34 crc kubenswrapper[4687]: I0312 17:22:34.538403 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-hh4pt_268b6164-14c8-4663-80be-5b81ea271407/logging-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:34 crc kubenswrapper[4687]: I0312 17:22:34.787791 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_407b0a6d-21bf-462c-88b5-4326f412af6d/mysqld-exporter/0.log" Mar 12 17:22:35 crc kubenswrapper[4687]: I0312 17:22:35.164987 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-874865cfc-trxxb_b10bb84e-71f9-4b10-8a9e-24e05136a576/neutron-httpd/0.log" Mar 12 17:22:35 crc kubenswrapper[4687]: I0312 17:22:35.175601 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-874865cfc-trxxb_b10bb84e-71f9-4b10-8a9e-24e05136a576/neutron-api/0.log" Mar 12 17:22:35 crc kubenswrapper[4687]: I0312 17:22:35.449440 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7h8s5_ee50fab4-f911-4a8c-991d-c8ec9a408352/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:36 crc kubenswrapper[4687]: I0312 17:22:36.096634 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ad69d278-88f9-4542-9673-664b522fd89c/nova-api-log/0.log" Mar 12 17:22:36 crc kubenswrapper[4687]: I0312 17:22:36.256953 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_20389fa7-46fb-42a6-a35c-4051648e70ea/nova-cell0-conductor-conductor/0.log" Mar 12 17:22:36 crc kubenswrapper[4687]: I0312 17:22:36.567701 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ad69d278-88f9-4542-9673-664b522fd89c/nova-api-api/0.log" Mar 12 17:22:36 crc kubenswrapper[4687]: I0312 17:22:36.627411 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3d74af86-ee5f-4dda-b580-f199b1841b46/nova-cell1-conductor-conductor/0.log" Mar 12 17:22:36 crc kubenswrapper[4687]: I0312 17:22:36.857887 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_45b04c92-5e49-4aaa-937d-1bbf1339cbfb/nova-cell1-novncproxy-novncproxy/0.log" Mar 12 17:22:36 crc kubenswrapper[4687]: I0312 17:22:36.935500 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-684c74d595-mgzvt_ca335c8b-6106-464d-ae4f-9efbed783816/keystone-api/0.log" Mar 12 17:22:36 crc kubenswrapper[4687]: I0312 17:22:36.941967 4687 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-b762t_f4e8ecd3-b38d-4144-9662-098445ab656b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:37 crc kubenswrapper[4687]: I0312 17:22:37.135111 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7d138177-7712-4706-8e2c-db53f8914cca/nova-metadata-log/0.log" Mar 12 17:22:37 crc kubenswrapper[4687]: I0312 17:22:37.469411 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a742c8fb-2af2-4192-bf5a-475f472b323a/mysql-bootstrap/0.log" Mar 12 17:22:37 crc kubenswrapper[4687]: I0312 17:22:37.594939 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1bf5ecb2-7178-4f78-8942-f75d813da22a/nova-scheduler-scheduler/0.log" Mar 12 17:22:37 crc kubenswrapper[4687]: I0312 17:22:37.690894 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a742c8fb-2af2-4192-bf5a-475f472b323a/mysql-bootstrap/0.log" Mar 12 17:22:37 crc kubenswrapper[4687]: I0312 17:22:37.727605 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a742c8fb-2af2-4192-bf5a-475f472b323a/galera/1.log" Mar 12 17:22:37 crc kubenswrapper[4687]: I0312 17:22:37.835192 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a742c8fb-2af2-4192-bf5a-475f472b323a/galera/0.log" Mar 12 17:22:37 crc kubenswrapper[4687]: I0312 17:22:37.962716 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2f2ec7e-fcd2-4749-9f00-ffe100081b84/mysql-bootstrap/0.log" Mar 12 17:22:38 crc kubenswrapper[4687]: I0312 17:22:38.228944 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2f2ec7e-fcd2-4749-9f00-ffe100081b84/mysql-bootstrap/0.log" Mar 12 17:22:38 crc kubenswrapper[4687]: I0312 17:22:38.260262 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2f2ec7e-fcd2-4749-9f00-ffe100081b84/galera/0.log" Mar 12 17:22:38 crc kubenswrapper[4687]: I0312 17:22:38.301131 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2f2ec7e-fcd2-4749-9f00-ffe100081b84/galera/1.log" Mar 12 17:22:38 crc kubenswrapper[4687]: I0312 17:22:38.519059 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d39f34d0-cde8-47bd-8bfc-929c8cf9de03/openstackclient/0.log" Mar 12 17:22:38 crc kubenswrapper[4687]: I0312 17:22:38.722736 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9x5hb_7c43d5a9-eafe-4910-acf5-0502509982b3/ovn-controller/0.log" Mar 12 17:22:38 crc kubenswrapper[4687]: I0312 17:22:38.812273 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7d138177-7712-4706-8e2c-db53f8914cca/nova-metadata-metadata/0.log" Mar 12 17:22:38 crc kubenswrapper[4687]: I0312 17:22:38.877302 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-42bmb_12396dd7-4cb8-4e1d-87d5-e03d09c6a01e/openstack-network-exporter/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.034732 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jwnk7_24d69a73-06c7-48b5-9479-7816c969dafc/ovsdb-server-init/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.257665 4687 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jwnk7_24d69a73-06c7-48b5-9479-7816c969dafc/ovsdb-server-init/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.268313 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jwnk7_24d69a73-06c7-48b5-9479-7816c969dafc/ovs-vswitchd/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.298584 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jwnk7_24d69a73-06c7-48b5-9479-7816c969dafc/ovsdb-server/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.513993 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2d359c7e-5a3d-430f-85c4-89dea1de02d7/openstack-network-exporter/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.520883 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qlb4j_de0d3f98-6bf7-432a-ab9c-5be397f44fc2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.527618 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2d359c7e-5a3d-430f-85c4-89dea1de02d7/ovn-northd/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.696638 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_319f1aa4-c9b0-4424-aba2-3f8fa4c36257/openstack-network-exporter/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.787594 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_319f1aa4-c9b0-4424-aba2-3f8fa4c36257/ovsdbserver-nb/0.log" Mar 12 17:22:39 crc kubenswrapper[4687]: I0312 17:22:39.918640 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92d9c0-0e63-4a78-bd77-a79cbc20449b/openstack-network-exporter/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.155279 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92d9c0-0e63-4a78-bd77-a79cbc20449b/ovsdbserver-sb/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.222005 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84c558f4db-5rcnd_8a158c0a-38a3-4fd1-b759-302a9c695434/placement-api/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.324702 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84c558f4db-5rcnd_8a158c0a-38a3-4fd1-b759-302a9c695434/placement-log/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.452518 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1d3c2a08-d60e-4b86-858d-f5ac038f566e/init-config-reloader/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.647610 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1d3c2a08-d60e-4b86-858d-f5ac038f566e/init-config-reloader/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.669549 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1d3c2a08-d60e-4b86-858d-f5ac038f566e/config-reloader/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.718681 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1d3c2a08-d60e-4b86-858d-f5ac038f566e/thanos-sidecar/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 
17:22:40.730847 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_1d3c2a08-d60e-4b86-858d-f5ac038f566e/prometheus/0.log" Mar 12 17:22:40 crc kubenswrapper[4687]: I0312 17:22:40.899650 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c4d17b17-5362-40cb-ab0d-d39d96702d69/setup-container/0.log" Mar 12 17:22:41 crc kubenswrapper[4687]: I0312 17:22:41.120691 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c4d17b17-5362-40cb-ab0d-d39d96702d69/rabbitmq/0.log" Mar 12 17:22:41 crc kubenswrapper[4687]: I0312 17:22:41.121617 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c4d17b17-5362-40cb-ab0d-d39d96702d69/setup-container/0.log" Mar 12 17:22:41 crc kubenswrapper[4687]: I0312 17:22:41.991746 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f00fce39-19aa-4b10-9e76-04c114232731/setup-container/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.276113 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f00fce39-19aa-4b10-9e76-04c114232731/setup-container/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.330801 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_8df491ce-9158-4b80-958f-2008e3280c07/setup-container/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.405194 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f00fce39-19aa-4b10-9e76-04c114232731/rabbitmq/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.489012 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_8df491ce-9158-4b80-958f-2008e3280c07/setup-container/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.658643 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_457918a9-182a-4e9e-b03f-ab58128edc95/setup-container/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.727666 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_8df491ce-9158-4b80-958f-2008e3280c07/rabbitmq/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.878337 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_457918a9-182a-4e9e-b03f-ab58128edc95/setup-container/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.949841 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_457918a9-182a-4e9e-b03f-ab58128edc95/rabbitmq/0.log" Mar 12 17:22:42 crc kubenswrapper[4687]: I0312 17:22:42.953486 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wpzvq_ad369604-a912-475e-9904-5e8e4aa03271/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:43 crc kubenswrapper[4687]: I0312 17:22:43.768033 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-cqt44_5606d9d1-3ffd-4420-934b-b0e3c9ac86b7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:43 crc kubenswrapper[4687]: I0312 17:22:43.841861 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-47pkm_e4c32507-463c-4dbc-887f-41d1713ac4c6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:43 crc kubenswrapper[4687]: I0312 17:22:43.982961 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9cbz2_c0fd04af-824b-4f03-a28c-cfa84d27c015/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.121225 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.121285 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.152823 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vhxs9_bdec614b-1196-4d68-afa3-42f42fe2d7dc/ssh-known-hosts-edpm-deployment/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.332946 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b9d7fc5b5-76d88_0bbd130e-9a81-466f-8d89-79c2fa5fdc4c/proxy-server/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.396769 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hm5rd_b1b1f68f-7bdc-4437-922c-d0abc47c639c/swift-ring-rebalance/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.420745 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b9d7fc5b5-76d88_0bbd130e-9a81-466f-8d89-79c2fa5fdc4c/proxy-httpd/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.644888 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/account-auditor/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.686772 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/account-reaper/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.706754 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/account-replicator/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.837066 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/account-server/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.904452 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/container-auditor/0.log" Mar 12 17:22:44 crc kubenswrapper[4687]: I0312 17:22:44.951612 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/container-replicator/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.001671 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/container-server/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.061100 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/container-updater/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.149896 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/object-auditor/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.204428 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/object-expirer/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.321836 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/object-replicator/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.346731 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/object-server/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.429419 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/object-updater/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.476526 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/rsync/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.604977 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_97466c9b-724b-4349-8745-8803b025261a/swift-recon-cron/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.740444 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qrqt9_3c549521-57d3-4f63-b447-5700df3e3a47/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:45 crc kubenswrapper[4687]: I0312 17:22:45.885415 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-8ps95_72ce7c15-b60f-47fc-93d8-68d5dda4b9ec/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:46 crc kubenswrapper[4687]: I0312 17:22:46.174994 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_78208d69-5957-4700-a77e-3da993e19acd/test-operator-logs-container/0.log" Mar 12 17:22:46 crc kubenswrapper[4687]: I0312 17:22:46.239805 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-stndh_6c07bd33-4fe2-4ac1-8493-0fc93f9698fe/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 17:22:46 crc kubenswrapper[4687]: I0312 17:22:46.434146 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_401d4f9b-896e-4926-91ef-c90b5c38ef83/tempest-tests-tempest-tests-runner/0.log" Mar 12 17:22:47 crc kubenswrapper[4687]: I0312 17:22:47.471170 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e5aa64e0-72c4-4b44-8912-145bd488d369/memcached/0.log" Mar 12 17:22:59 crc kubenswrapper[4687]: I0312 17:22:59.182771 4687 scope.go:117] "RemoveContainer" 
containerID="52abfbd1b6128a397043d2d54e80519ab9ad7623aa55ba4154e09791747f568d" Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.122289 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.122756 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.122808 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.123829 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40ba0773921c7a67c712c495adb50129bda16d44f774878ddc552da57f9ef650"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.123884 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://40ba0773921c7a67c712c495adb50129bda16d44f774878ddc552da57f9ef650" gracePeriod=600 Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.863991 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="40ba0773921c7a67c712c495adb50129bda16d44f774878ddc552da57f9ef650" exitCode=0 Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.864257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"40ba0773921c7a67c712c495adb50129bda16d44f774878ddc552da57f9ef650"} Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.864504 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e"} Mar 12 17:23:14 crc kubenswrapper[4687]: I0312 17:23:14.864532 4687 scope.go:117] "RemoveContainer" containerID="51fa3622aef5e145988d4dc9d3d7138e7438e1f212471a1240865f3f8264892b" Mar 12 17:23:17 crc kubenswrapper[4687]: I0312 17:23:17.915805 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh_203d759d-0e28-4cd0-b0d3-a65f82035ef6/util/0.log" Mar 12 17:23:18 crc kubenswrapper[4687]: I0312 17:23:18.065136 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh_203d759d-0e28-4cd0-b0d3-a65f82035ef6/util/0.log" Mar 12 17:23:18 crc kubenswrapper[4687]: I0312 17:23:18.162278 
4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh_203d759d-0e28-4cd0-b0d3-a65f82035ef6/pull/0.log" Mar 12 17:23:18 crc kubenswrapper[4687]: I0312 17:23:18.219500 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh_203d759d-0e28-4cd0-b0d3-a65f82035ef6/pull/0.log" Mar 12 17:23:18 crc kubenswrapper[4687]: I0312 17:23:18.422140 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh_203d759d-0e28-4cd0-b0d3-a65f82035ef6/extract/0.log" Mar 12 17:23:18 crc kubenswrapper[4687]: I0312 17:23:18.434882 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh_203d759d-0e28-4cd0-b0d3-a65f82035ef6/pull/0.log" Mar 12 17:23:18 crc kubenswrapper[4687]: I0312 17:23:18.504417 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b04333b38f552647f5e57b06e1d3f9b2420befdd0a1a1aff970546e123zx4mh_203d759d-0e28-4cd0-b0d3-a65f82035ef6/util/0.log" Mar 12 17:23:20 crc kubenswrapper[4687]: I0312 17:23:20.677070 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-dx9rg_0fc40919-64b1-4b8c-ab92-b9297cb5c352/manager/1.log" Mar 12 17:23:21 crc kubenswrapper[4687]: I0312 17:23:21.178400 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-9rsqr_af6289c5-2a9a-4429-96d6-3c7bbff706e0/manager/1.log" Mar 12 17:23:21 crc kubenswrapper[4687]: I0312 17:23:21.438158 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-9rsqr_af6289c5-2a9a-4429-96d6-3c7bbff706e0/manager/0.log" Mar 12 17:23:21 crc kubenswrapper[4687]: I0312 17:23:21.919672 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-65vx5_63f537cc-6a26-4a05-9b17-80549297e9f2/manager/1.log" Mar 12 17:23:22 crc kubenswrapper[4687]: I0312 17:23:22.291133 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-65vx5_63f537cc-6a26-4a05-9b17-80549297e9f2/manager/0.log" Mar 12 17:23:22 crc kubenswrapper[4687]: I0312 17:23:22.587020 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-dx9rg_0fc40919-64b1-4b8c-ab92-b9297cb5c352/manager/0.log" Mar 12 17:23:22 crc kubenswrapper[4687]: I0312 17:23:22.621205 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-nq67j_64a70e69-432d-4ddc-8eef-e16f4e374c56/manager/1.log" Mar 12 17:23:22 crc kubenswrapper[4687]: I0312 17:23:22.857070 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-d8j97_718e95dd-fb86-4403-8048-d68f1f23d3ca/manager/1.log" Mar 12 17:23:22 crc kubenswrapper[4687]: I0312 17:23:22.871728 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-nq67j_64a70e69-432d-4ddc-8eef-e16f4e374c56/manager/0.log" Mar 12 17:23:23 crc kubenswrapper[4687]: I0312 
17:23:23.057868 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-d8j97_718e95dd-fb86-4403-8048-d68f1f23d3ca/manager/0.log" Mar 12 17:23:23 crc kubenswrapper[4687]: I0312 17:23:23.130154 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-7cchw_43d0733d-5a4f-4b51-a95e-eb2cf8593545/manager/1.log" Mar 12 17:23:23 crc kubenswrapper[4687]: I0312 17:23:23.412109 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-8jrnk_7569aa35-67ce-43f4-8e4c-f851973745d9/manager/1.log" Mar 12 17:23:23 crc kubenswrapper[4687]: I0312 17:23:23.685149 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-8jrnk_7569aa35-67ce-43f4-8e4c-f851973745d9/manager/0.log" Mar 12 17:23:23 crc kubenswrapper[4687]: I0312 17:23:23.887929 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-7cchw_43d0733d-5a4f-4b51-a95e-eb2cf8593545/manager/0.log" Mar 12 17:23:23 crc kubenswrapper[4687]: I0312 17:23:23.927206 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-68xmx_79d0c51f-999a-4e39-b6b5-aecf10472a4c/manager/1.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.042208 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-68xmx_79d0c51f-999a-4e39-b6b5-aecf10472a4c/manager/0.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.170716 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-hmn45_1c9c8552-26b0-408f-bd09-40c74041cbfa/manager/1.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.276631 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-hmn45_1c9c8552-26b0-408f-bd09-40c74041cbfa/manager/0.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.402537 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-wsq7q_39ab069c-1ccd-4ad4-b4ea-b71b1b09472f/manager/1.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.614065 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-wsq7q_39ab069c-1ccd-4ad4-b4ea-b71b1b09472f/manager/0.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.640789 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-cds6d_15c585dd-9efa-430b-aeb5-42eaeace0d18/manager/1.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.907099 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-cds6d_15c585dd-9efa-430b-aeb5-42eaeace0d18/manager/0.log" Mar 12 17:23:24 crc kubenswrapper[4687]: I0312 17:23:24.986934 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-d9k4b_90a35858-7aa2-450f-af1f-9686c8be3863/manager/1.log" Mar 12 17:23:25 crc kubenswrapper[4687]: I0312 17:23:25.247262 4687 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-kjnx8_a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7/manager/1.log" Mar 12 17:23:25 crc kubenswrapper[4687]: I0312 17:23:25.281174 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-d9k4b_90a35858-7aa2-450f-af1f-9686c8be3863/manager/0.log" Mar 12 17:23:25 crc kubenswrapper[4687]: I0312 17:23:25.352350 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-kjnx8_a58dc7d0-2c48-4894-bdfd-079e0a8b6fb7/manager/0.log" Mar 12 17:23:25 crc kubenswrapper[4687]: I0312 17:23:25.546205 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz_dc5ebdf2-e54a-4c66-abb7-35039f9226dc/manager/0.log" Mar 12 17:23:25 crc kubenswrapper[4687]: I0312 17:23:25.588028 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7jt4hz_dc5ebdf2-e54a-4c66-abb7-35039f9226dc/manager/1.log" Mar 12 17:23:25 crc kubenswrapper[4687]: I0312 17:23:25.838303 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-568548c879-hsj4g_43fc4c76-a11e-4403-81b5-ee741b3c2a63/operator/1.log" Mar 12 17:23:26 crc kubenswrapper[4687]: I0312 17:23:26.126430 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-568548c879-hsj4g_43fc4c76-a11e-4403-81b5-ee741b3c2a63/operator/0.log" Mar 12 17:23:26 crc kubenswrapper[4687]: I0312 17:23:26.419508 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-dbdf4d967-glnf2_9af65423-8d26-4ff5-97ee-711dc0c4501b/manager/1.log" Mar 12 17:23:27 crc kubenswrapper[4687]: I0312 17:23:27.120552 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l6tqh_dd381e8d-4f1d-48ef-b8a7-b10f6c97b334/registry-server/1.log" Mar 12 17:23:27 crc kubenswrapper[4687]: I0312 17:23:27.157691 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l6tqh_dd381e8d-4f1d-48ef-b8a7-b10f6c97b334/registry-server/0.log" Mar 12 17:23:27 crc kubenswrapper[4687]: I0312 17:23:27.371928 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-str9n_1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0/manager/1.log" Mar 12 17:23:27 crc kubenswrapper[4687]: I0312 17:23:27.640997 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-str9n_1bdeaf7b-d36f-466c-9dd2-4f8a5330dfd0/manager/0.log" Mar 12 17:23:27 crc kubenswrapper[4687]: I0312 17:23:27.806696 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-4rpv5_ecc97932-9eae-4d08-910b-b68e0e7d8002/manager/1.log" Mar 12 17:23:27 crc kubenswrapper[4687]: I0312 17:23:27.947268 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-4rpv5_ecc97932-9eae-4d08-910b-b68e0e7d8002/manager/0.log" Mar 12 17:23:28 crc kubenswrapper[4687]: I0312 17:23:28.253348 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-g4z6z_bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2/operator/1.log" Mar 12 17:23:28 crc kubenswrapper[4687]: I0312 17:23:28.260653 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-vqhcv_26adb4e9-0197-4023-b876-afbb572f93d8/manager/1.log" Mar 12 17:23:28 crc kubenswrapper[4687]: I0312 17:23:28.414054 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-g4z6z_bc24e0b5-b9e5-44a3-b36c-dad06da1c2e2/operator/0.log" Mar 12 17:23:28 crc kubenswrapper[4687]: I0312 17:23:28.520976 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-v98w4_65afd209-a452-442f-853d-d2e062fa2530/manager/1.log" Mar 12 17:23:28 crc kubenswrapper[4687]: I0312 17:23:28.670341 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-v98w4_65afd209-a452-442f-853d-d2e062fa2530/manager/0.log" Mar 12 17:23:28 crc kubenswrapper[4687]: I0312 17:23:28.745401 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-dbdf4d967-glnf2_9af65423-8d26-4ff5-97ee-711dc0c4501b/manager/0.log" Mar 12 17:23:28 crc kubenswrapper[4687]: I0312 17:23:28.797742 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-748fccb5bd-pgbh8_066b8087-d58d-4c75-a4bb-4a4b26710855/manager/1.log" Mar 12 17:23:29 crc kubenswrapper[4687]: I0312 17:23:29.577076 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-ft9mp_616298f1-0baf-428d-9bb9-3a87f52085e8/manager/0.log" Mar 12 17:23:29 crc kubenswrapper[4687]: I0312 17:23:29.664457 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-ft9mp_616298f1-0baf-428d-9bb9-3a87f52085e8/manager/1.log" Mar 12 17:23:29 crc kubenswrapper[4687]: I0312 17:23:29.933716 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-hngln_85d59f34-51a3-4c41-836e-9cc32f5da5e4/manager/0.log" Mar 12 17:23:29 crc kubenswrapper[4687]: I0312 17:23:29.968432 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-hngln_85d59f34-51a3-4c41-836e-9cc32f5da5e4/manager/1.log" Mar 12 17:23:29 crc kubenswrapper[4687]: I0312 17:23:29.972519 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-748fccb5bd-pgbh8_066b8087-d58d-4c75-a4bb-4a4b26710855/manager/0.log" Mar 12 17:23:33 crc kubenswrapper[4687]: I0312 17:23:33.527805 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-vqhcv_26adb4e9-0197-4023-b876-afbb572f93d8/manager/0.log" Mar 12 17:23:54 crc kubenswrapper[4687]: I0312 17:23:54.268428 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-g4cls_fd93502e-13cb-47b4-b70e-1fcaacc70ca1/control-plane-machine-set-operator/0.log" Mar 12 17:23:54 crc kubenswrapper[4687]: I0312 17:23:54.459124 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tvj7q_e075a2cc-590a-4ac7-a6d5-c02336912013/kube-rbac-proxy/0.log" Mar 12 17:23:54 crc kubenswrapper[4687]: I0312 17:23:54.532171 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tvj7q_e075a2cc-590a-4ac7-a6d5-c02336912013/machine-api-operator/0.log" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.145978 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555604-cxb7p"] Mar 12 17:24:00 crc kubenswrapper[4687]: E0312 17:24:00.147035 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd24dd7f-8745-4439-82d6-af2eca4e4885" containerName="oc" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.147049 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd24dd7f-8745-4439-82d6-af2eca4e4885" containerName="oc" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.147297 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd24dd7f-8745-4439-82d6-af2eca4e4885" containerName="oc" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.148129 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.155000 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.155740 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.161828 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555604-cxb7p"] Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.162587 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.219661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bg6d\" (UniqueName: \"kubernetes.io/projected/1a64d491-fb3a-432a-a02b-ee9a4abc96dd-kube-api-access-9bg6d\") pod \"auto-csr-approver-29555604-cxb7p\" (UID: \"1a64d491-fb3a-432a-a02b-ee9a4abc96dd\") " pod="openshift-infra/auto-csr-approver-29555604-cxb7p" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.322065 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bg6d\" (UniqueName: \"kubernetes.io/projected/1a64d491-fb3a-432a-a02b-ee9a4abc96dd-kube-api-access-9bg6d\") pod \"auto-csr-approver-29555604-cxb7p\" (UID: \"1a64d491-fb3a-432a-a02b-ee9a4abc96dd\") " pod="openshift-infra/auto-csr-approver-29555604-cxb7p" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.761621 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bg6d\" (UniqueName: \"kubernetes.io/projected/1a64d491-fb3a-432a-a02b-ee9a4abc96dd-kube-api-access-9bg6d\") pod \"auto-csr-approver-29555604-cxb7p\" (UID: \"1a64d491-fb3a-432a-a02b-ee9a4abc96dd\") " pod="openshift-infra/auto-csr-approver-29555604-cxb7p" Mar 12 17:24:00 crc kubenswrapper[4687]: I0312 17:24:00.777839 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" Mar 12 17:24:01 crc kubenswrapper[4687]: W0312 17:24:01.314611 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a64d491_fb3a_432a_a02b_ee9a4abc96dd.slice/crio-02771b0692d3cc1313190086d8c1a7b7dd739d5ca8e53e846c62187e9bfa501f WatchSource:0}: Error finding container 02771b0692d3cc1313190086d8c1a7b7dd739d5ca8e53e846c62187e9bfa501f: Status 404 returned error can't find the container with id 02771b0692d3cc1313190086d8c1a7b7dd739d5ca8e53e846c62187e9bfa501f Mar 12 17:24:01 crc kubenswrapper[4687]: I0312 17:24:01.318851 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555604-cxb7p"] Mar 12 17:24:01 crc kubenswrapper[4687]: I0312 17:24:01.421432 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" event={"ID":"1a64d491-fb3a-432a-a02b-ee9a4abc96dd","Type":"ContainerStarted","Data":"02771b0692d3cc1313190086d8c1a7b7dd739d5ca8e53e846c62187e9bfa501f"} Mar 12 17:24:03 crc kubenswrapper[4687]: I0312 17:24:03.446655 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" event={"ID":"1a64d491-fb3a-432a-a02b-ee9a4abc96dd","Type":"ContainerStarted","Data":"12504da925e887ec22304f5e95529431f37217a6461300acb20b92f113134997"} Mar 12 17:24:03 crc kubenswrapper[4687]: I0312 17:24:03.467956 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" podStartSLOduration=2.253294478 podStartE2EDuration="3.467932271s" podCreationTimestamp="2026-03-12 17:24:00 +0000 UTC" firstStartedPulling="2026-03-12 17:24:01.317303663 +0000 UTC m=+4890.281266007" lastFinishedPulling="2026-03-12 17:24:02.531941456 +0000 UTC m=+4891.495903800" observedRunningTime="2026-03-12 17:24:03.459355877 +0000 UTC m=+4892.423318241" watchObservedRunningTime="2026-03-12 17:24:03.467932271 +0000 UTC m=+4892.431894615" Mar 12 17:24:04 crc kubenswrapper[4687]: I0312 17:24:04.459214 4687 generic.go:334] "Generic (PLEG): container finished" podID="1a64d491-fb3a-432a-a02b-ee9a4abc96dd" containerID="12504da925e887ec22304f5e95529431f37217a6461300acb20b92f113134997" exitCode=0 Mar 12 17:24:04 crc kubenswrapper[4687]: I0312 17:24:04.459318 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" event={"ID":"1a64d491-fb3a-432a-a02b-ee9a4abc96dd","Type":"ContainerDied","Data":"12504da925e887ec22304f5e95529431f37217a6461300acb20b92f113134997"} Mar 12 17:24:05 crc kubenswrapper[4687]: I0312 17:24:05.918337 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" Mar 12 17:24:05 crc kubenswrapper[4687]: I0312 17:24:05.983330 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bg6d\" (UniqueName: \"kubernetes.io/projected/1a64d491-fb3a-432a-a02b-ee9a4abc96dd-kube-api-access-9bg6d\") pod \"1a64d491-fb3a-432a-a02b-ee9a4abc96dd\" (UID: \"1a64d491-fb3a-432a-a02b-ee9a4abc96dd\") " Mar 12 17:24:05 crc kubenswrapper[4687]: I0312 17:24:05.990085 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a64d491-fb3a-432a-a02b-ee9a4abc96dd-kube-api-access-9bg6d" (OuterVolumeSpecName: "kube-api-access-9bg6d") pod "1a64d491-fb3a-432a-a02b-ee9a4abc96dd" (UID: "1a64d491-fb3a-432a-a02b-ee9a4abc96dd"). InnerVolumeSpecName "kube-api-access-9bg6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:24:06 crc kubenswrapper[4687]: I0312 17:24:06.087150 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bg6d\" (UniqueName: \"kubernetes.io/projected/1a64d491-fb3a-432a-a02b-ee9a4abc96dd-kube-api-access-9bg6d\") on node \"crc\" DevicePath \"\"" Mar 12 17:24:06 crc kubenswrapper[4687]: I0312 17:24:06.495268 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" event={"ID":"1a64d491-fb3a-432a-a02b-ee9a4abc96dd","Type":"ContainerDied","Data":"02771b0692d3cc1313190086d8c1a7b7dd739d5ca8e53e846c62187e9bfa501f"} Mar 12 17:24:06 crc kubenswrapper[4687]: I0312 17:24:06.495308 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02771b0692d3cc1313190086d8c1a7b7dd739d5ca8e53e846c62187e9bfa501f" Mar 12 17:24:06 crc kubenswrapper[4687]: I0312 17:24:06.495858 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555604-cxb7p" Mar 12 17:24:06 crc kubenswrapper[4687]: I0312 17:24:06.580095 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555598-w5bf7"] Mar 12 17:24:06 crc kubenswrapper[4687]: I0312 17:24:06.592561 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555598-w5bf7"] Mar 12 17:24:07 crc kubenswrapper[4687]: I0312 17:24:07.751491 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51edd34d-2c01-4595-ae73-242d69a16c19" path="/var/lib/kubelet/pods/51edd34d-2c01-4595-ae73-242d69a16c19/volumes" Mar 12 17:24:10 crc kubenswrapper[4687]: I0312 17:24:10.997201 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-6qgpr_13f9cddd-9bcc-4408-9cda-9dbb8f1f5cc7/cert-manager-controller/0.log" Mar 12 17:24:11 crc kubenswrapper[4687]: I0312 17:24:11.287590 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-km6xl_b16711e0-4d1c-4545-8399-acbb7e248fe8/cert-manager-cainjector/0.log" Mar 12 17:24:11 crc kubenswrapper[4687]: I0312 17:24:11.318552 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4dcdp_f46caff8-15ce-49be-97d0-08e60d937972/cert-manager-webhook/0.log" Mar 12 17:24:27 crc kubenswrapper[4687]: I0312 17:24:27.320220 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-sw8zc_0dbc8317-dcca-4eeb-a5c7-ec72be4a0278/nmstate-console-plugin/0.log" Mar 12 17:24:27 crc kubenswrapper[4687]: I0312 17:24:27.538994 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pjvcq_40b2acd2-7fab-41ca-9ba5-7f8a5dc50606/nmstate-handler/0.log" Mar 12 17:24:27 crc kubenswrapper[4687]: I0312 17:24:27.588851 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-7b5ps_3e0f8548-f03f-4f9e-a422-13093d87d32e/nmstate-metrics/0.log" Mar 12 17:24:27 crc kubenswrapper[4687]: I0312 17:24:27.598926 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-7b5ps_3e0f8548-f03f-4f9e-a422-13093d87d32e/kube-rbac-proxy/0.log" Mar 12 17:24:27 crc kubenswrapper[4687]: I0312 17:24:27.818185 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-jp88k_97a0494c-2509-4e76-afd9-fd2be9482d5d/nmstate-webhook/0.log" Mar 12 17:24:27 crc kubenswrapper[4687]: I0312 17:24:27.837652 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-fz9qq_7e98880b-25b5-4e73-a4aa-aa70c426dc07/nmstate-operator/0.log" Mar 12 17:24:43 crc kubenswrapper[4687]: I0312 17:24:43.262493 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-56bfd9f789-6lvcv_4568a6b6-c008-4ea7-abec-b824324732d3/kube-rbac-proxy/0.log" Mar 12 17:24:43 crc kubenswrapper[4687]: I0312 17:24:43.363199 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-56bfd9f789-6lvcv_4568a6b6-c008-4ea7-abec-b824324732d3/manager/1.log" Mar 12 17:24:43 crc kubenswrapper[4687]: I0312 17:24:43.493908 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-56bfd9f789-6lvcv_4568a6b6-c008-4ea7-abec-b824324732d3/manager/0.log" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.213227 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sb8w2_f97d4a45-34b0-4192-a7cb-05d23d1b614d/prometheus-operator/0.log" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.301003 4687 scope.go:117] "RemoveContainer" containerID="eb7d23cfea9f33dd11a304601ad37afb86021dedc3f59324bc28165388898bcb" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.391576 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_9e487d62-c210-4aa0-b5a2-371bcf18cad5/prometheus-operator-admission-webhook/0.log" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.446820 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e/prometheus-operator-admission-webhook/0.log" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.627227 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tbpsw_afacf716-028a-4848-a495-83f7c01a47ca/operator/1.log" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.652204 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tbpsw_afacf716-028a-4848-a495-83f7c01a47ca/operator/0.log" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.702683 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-mtdkz_3a1437be-06a2-43c0-9ae4-6be8a822e466/observability-ui-dashboards/0.log" Mar 12 17:24:59 crc kubenswrapper[4687]: I0312 17:24:59.885156 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-z25z9_98e16c84-ec9c-482a-8962-ce13556ffd74/perses-operator/0.log" Mar 12 17:25:14 crc kubenswrapper[4687]: I0312 17:25:14.122004 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:25:14 crc kubenswrapper[4687]: I0312 17:25:14.122525 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:25:15 crc kubenswrapper[4687]: I0312 17:25:15.889708 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-66689c4bbf-rb2sl_c59704b5-c215-4306-9e1e-05ee0bfc055e/cluster-logging-operator/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.135678 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-d6qd2_2cc2c239-dfdc-448a-ad41-67de823ec204/collector/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.146827 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-compactor-0_f017a70e-cb13-441a-a70c-0809569c1c52/loki-compactor/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.345917 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-9c6b6d984-8k59p_33969389-2dd2-4c4b-ae70-d6e71f0fdf14/loki-distributor/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.378144 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-54f8b9b48b-87lbf_7b24457c-bd41-4df3-95a1-10b69540a4af/gateway/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.460113 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-54f8b9b48b-87lbf_7b24457c-bd41-4df3-95a1-10b69540a4af/opa/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.579592 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-54f8b9b48b-cfshk_75324694-6fad-4d26-8415-9d7f55ab5c1d/gateway/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.592296 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-54f8b9b48b-cfshk_75324694-6fad-4d26-8415-9d7f55ab5c1d/opa/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.817063 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_f56100a9-2dec-4a46-a619-6922b78f7e16/loki-index-gateway/0.log" Mar 12 17:25:16 crc kubenswrapper[4687]: I0312 17:25:16.843784 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_2dc7c2f7-1478-4385-be1a-a2257e4dc2d3/loki-ingester/0.log" Mar 12 17:25:17 crc kubenswrapper[4687]: I0312 17:25:17.001109 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-6dcbdf8bb8-6zsz9_bc2003f0-4f8e-4e59-8a1a-dd7be452b232/loki-querier/0.log" Mar 12 17:25:17 crc kubenswrapper[4687]: I0312 17:25:17.049631 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-ff66c4dc9-bxc6s_ac17b136-46f1-4129-a22f-bcd3baaf7813/loki-query-frontend/0.log" Mar 12 17:25:33 crc kubenswrapper[4687]: I0312 17:25:33.362796 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-hq4lb_80778679-d1d9-4307-990d-7e79bf7ce3f3/controller/1.log" Mar 12 17:25:33 crc kubenswrapper[4687]: I0312 17:25:33.451759 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-hq4lb_80778679-d1d9-4307-990d-7e79bf7ce3f3/controller/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.070830 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-frr-files/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.096506 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-hq4lb_80778679-d1d9-4307-990d-7e79bf7ce3f3/kube-rbac-proxy/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.329342 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-frr-files/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.354741 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-reloader/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.362454 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-metrics/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.399677 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-reloader/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.580180 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-frr-files/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.601346 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-metrics/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.669786 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-reloader/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.674541 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-metrics/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.901167 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-frr-files/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.929595 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-reloader/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.929629 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/cp-metrics/0.log" Mar 12 17:25:34 crc kubenswrapper[4687]: I0312 17:25:34.944811 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/controller/1.log" Mar 12 17:25:35 crc kubenswrapper[4687]: I0312 17:25:35.101745 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/controller/0.log" Mar 12 17:25:35 crc kubenswrapper[4687]: I0312 17:25:35.167598 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/frr-metrics/0.log" Mar 12 17:25:35 crc kubenswrapper[4687]: I0312 17:25:35.243804 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/frr/1.log" Mar 12 17:25:35 crc kubenswrapper[4687]: I0312 17:25:35.417467 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/kube-rbac-proxy-frr/0.log" Mar 12 17:25:35 crc kubenswrapper[4687]: I0312 17:25:35.417688 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/kube-rbac-proxy/0.log" Mar 12 17:25:35 crc kubenswrapper[4687]: I0312 17:25:35.561227 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/reloader/0.log" Mar 12 17:25:35 crc kubenswrapper[4687]: I0312 
17:25:35.690424 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-qgv9f_89dd944c-557b-4060-914f-c5287ed954bb/frr-k8s-webhook-server/0.log" Mar 12 17:25:36 crc kubenswrapper[4687]: I0312 17:25:36.066242 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-546598f745-bbcrq_0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a/manager/1.log" Mar 12 17:25:36 crc kubenswrapper[4687]: I0312 17:25:36.173562 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-546598f745-bbcrq_0b46daa4-d0c2-4bc3-a314-ad1c1ad7eb7a/manager/0.log" Mar 12 17:25:36 crc kubenswrapper[4687]: I0312 17:25:36.417012 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-b584c959d-dtlbw_69e1152d-3280-4ab7-81dd-dc83f0daa3dc/webhook-server/1.log" Mar 12 17:25:36 crc kubenswrapper[4687]: I0312 17:25:36.463968 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-b584c959d-dtlbw_69e1152d-3280-4ab7-81dd-dc83f0daa3dc/webhook-server/0.log" Mar 12 17:25:36 crc kubenswrapper[4687]: I0312 17:25:36.775130 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jrgss_00c4362c-6a07-47c7-a60a-bbaf5b9f0260/kube-rbac-proxy/0.log" Mar 12 17:25:37 crc kubenswrapper[4687]: I0312 17:25:37.075714 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xd8n_1003c23c-a0cb-4878-8399-d7b435084227/frr/0.log" Mar 12 17:25:37 crc kubenswrapper[4687]: I0312 17:25:37.151603 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jrgss_00c4362c-6a07-47c7-a60a-bbaf5b9f0260/speaker/1.log" Mar 12 17:25:37 crc kubenswrapper[4687]: I0312 17:25:37.435160 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jrgss_00c4362c-6a07-47c7-a60a-bbaf5b9f0260/speaker/0.log" Mar 12 17:25:44 crc kubenswrapper[4687]: I0312 17:25:44.121652 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:25:44 crc kubenswrapper[4687]: I0312 17:25:44.123113 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:25:54 crc kubenswrapper[4687]: I0312 17:25:54.769614 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt_2ac13e69-6be5-4b89-9a5d-9c535b368b5e/util/0.log" Mar 12 17:25:54 crc kubenswrapper[4687]: I0312 17:25:54.964662 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt_2ac13e69-6be5-4b89-9a5d-9c535b368b5e/util/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.021643 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt_2ac13e69-6be5-4b89-9a5d-9c535b368b5e/pull/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.065307 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt_2ac13e69-6be5-4b89-9a5d-9c535b368b5e/pull/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.210079 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt_2ac13e69-6be5-4b89-9a5d-9c535b368b5e/pull/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.218292 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt_2ac13e69-6be5-4b89-9a5d-9c535b368b5e/util/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.255259 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874jwcxt_2ac13e69-6be5-4b89-9a5d-9c535b368b5e/extract/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.431035 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64_1d8d85c9-7b46-4030-8930-662b9b0012a3/util/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.654569 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64_1d8d85c9-7b46-4030-8930-662b9b0012a3/util/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.695815 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64_1d8d85c9-7b46-4030-8930-662b9b0012a3/pull/0.log" Mar 12 17:25:55 crc kubenswrapper[4687]: I0312 17:25:55.698295 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64_1d8d85c9-7b46-4030-8930-662b9b0012a3/pull/0.log" Mar 12 17:25:56 crc kubenswrapper[4687]: I0312 17:25:56.682624 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64_1d8d85c9-7b46-4030-8930-662b9b0012a3/util/0.log" Mar 12 17:25:56 crc kubenswrapper[4687]: I0312 17:25:56.708292 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64_1d8d85c9-7b46-4030-8930-662b9b0012a3/extract/0.log" Mar 12 17:25:56 crc kubenswrapper[4687]: I0312 17:25:56.738419 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ggk64_1d8d85c9-7b46-4030-8930-662b9b0012a3/pull/0.log" Mar 12 17:25:56 crc kubenswrapper[4687]: I0312 17:25:56.925473 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg_7193d683-0bda-494e-87bd-79a506c1ec30/util/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.164159 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg_7193d683-0bda-494e-87bd-79a506c1ec30/pull/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.174150 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg_7193d683-0bda-494e-87bd-79a506c1ec30/util/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.175910 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg_7193d683-0bda-494e-87bd-79a506c1ec30/pull/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.377429 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg_7193d683-0bda-494e-87bd-79a506c1ec30/util/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.413868 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg_7193d683-0bda-494e-87bd-79a506c1ec30/pull/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.429084 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19kzrfg_7193d683-0bda-494e-87bd-79a506c1ec30/extract/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.611549 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58_cff33c98-5d1d-4726-adde-df3333665efa/util/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.855285 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58_cff33c98-5d1d-4726-adde-df3333665efa/util/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.874648 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58_cff33c98-5d1d-4726-adde-df3333665efa/pull/0.log" Mar 12 17:25:57 crc kubenswrapper[4687]: I0312 17:25:57.946557 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58_cff33c98-5d1d-4726-adde-df3333665efa/pull/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.108041 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58_cff33c98-5d1d-4726-adde-df3333665efa/util/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.181097 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58_cff33c98-5d1d-4726-adde-df3333665efa/pull/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.200921 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d5gmt58_cff33c98-5d1d-4726-adde-df3333665efa/extract/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.220475 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw_a1d956c5-a568-4aff-ab9c-0f64eda22177/util/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.424136 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw_a1d956c5-a568-4aff-ab9c-0f64eda22177/util/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.437129 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw_a1d956c5-a568-4aff-ab9c-0f64eda22177/pull/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.440258 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw_a1d956c5-a568-4aff-ab9c-0f64eda22177/pull/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.632026 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw_a1d956c5-a568-4aff-ab9c-0f64eda22177/pull/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.643060 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw_a1d956c5-a568-4aff-ab9c-0f64eda22177/util/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.649166 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6ctcthw_a1d956c5-a568-4aff-ab9c-0f64eda22177/extract/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.688211 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p_7dffa69e-cac6-4faf-9808-ea2216e95faa/util/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.884431 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p_7dffa69e-cac6-4faf-9808-ea2216e95faa/util/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.906588 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p_7dffa69e-cac6-4faf-9808-ea2216e95faa/pull/0.log" Mar 12 17:25:58 crc kubenswrapper[4687]: I0312 17:25:58.931866 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p_7dffa69e-cac6-4faf-9808-ea2216e95faa/pull/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.081225 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p_7dffa69e-cac6-4faf-9808-ea2216e95faa/util/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.089009 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p_7dffa69e-cac6-4faf-9808-ea2216e95faa/extract/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.091793 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rrq2p_7dffa69e-cac6-4faf-9808-ea2216e95faa/pull/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.206615 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z2wpn_dea71408-c307-4443-a026-547ca7196ff6/extract-utilities/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.373217 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z2wpn_dea71408-c307-4443-a026-547ca7196ff6/extract-content/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.379230 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z2wpn_dea71408-c307-4443-a026-547ca7196ff6/extract-content/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.383060 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z2wpn_dea71408-c307-4443-a026-547ca7196ff6/extract-utilities/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.572037 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z2wpn_dea71408-c307-4443-a026-547ca7196ff6/extract-utilities/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.633499 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z2wpn_dea71408-c307-4443-a026-547ca7196ff6/extract-content/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.690713 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-69p8v_ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095/extract-utilities/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.893093 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-69p8v_ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095/extract-content/0.log" Mar 12 17:25:59 crc kubenswrapper[4687]: I0312 17:25:59.935194 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-69p8v_ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095/extract-utilities/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.045147 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-69p8v_ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095/extract-content/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.164035 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555606-gbx7n"] Mar 12 17:26:00 crc kubenswrapper[4687]: E0312 17:26:00.164646 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a64d491-fb3a-432a-a02b-ee9a4abc96dd" containerName="oc" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.164673 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a64d491-fb3a-432a-a02b-ee9a4abc96dd" containerName="oc" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.165025 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a64d491-fb3a-432a-a02b-ee9a4abc96dd" containerName="oc" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.166096 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555606-gbx7n" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.170298 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.170547 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.170654 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.198321 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555606-gbx7n"] Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.216585 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jk5n\" (UniqueName: \"kubernetes.io/projected/96eb9717-bb38-4f57-aa14-f7001a0d94d9-kube-api-access-9jk5n\") pod \"auto-csr-approver-29555606-gbx7n\" (UID: \"96eb9717-bb38-4f57-aa14-f7001a0d94d9\") " pod="openshift-infra/auto-csr-approver-29555606-gbx7n" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.219502 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-69p8v_ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095/extract-content/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.276471 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-69p8v_ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095/extract-utilities/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.318434 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jk5n\" (UniqueName: \"kubernetes.io/projected/96eb9717-bb38-4f57-aa14-f7001a0d94d9-kube-api-access-9jk5n\") pod \"auto-csr-approver-29555606-gbx7n\" (UID: \"96eb9717-bb38-4f57-aa14-f7001a0d94d9\") " pod="openshift-infra/auto-csr-approver-29555606-gbx7n" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.367624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jk5n\" (UniqueName: \"kubernetes.io/projected/96eb9717-bb38-4f57-aa14-f7001a0d94d9-kube-api-access-9jk5n\") pod \"auto-csr-approver-29555606-gbx7n\" (UID: \"96eb9717-bb38-4f57-aa14-f7001a0d94d9\") " pod="openshift-infra/auto-csr-approver-29555606-gbx7n" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.491700 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555606-gbx7n" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.514203 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt_5ea334fc-44e4-4a2b-9470-e5fa2ae01911/util/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.722066 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z2wpn_dea71408-c307-4443-a026-547ca7196ff6/registry-server/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.796550 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt_5ea334fc-44e4-4a2b-9470-e5fa2ae01911/util/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.852488 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt_5ea334fc-44e4-4a2b-9470-e5fa2ae01911/pull/0.log" Mar 12 17:26:00 crc kubenswrapper[4687]: I0312 17:26:00.875128 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt_5ea334fc-44e4-4a2b-9470-e5fa2ae01911/pull/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.059209 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555606-gbx7n"] Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.070172 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.108034 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-69p8v_ec8dfd8e-ec34-4ec7-a2cd-bfc3233a8095/registry-server/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.149531 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt_5ea334fc-44e4-4a2b-9470-e5fa2ae01911/util/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.157801 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt_5ea334fc-44e4-4a2b-9470-e5fa2ae01911/pull/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.210644 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989tl2bt_5ea334fc-44e4-4a2b-9470-e5fa2ae01911/extract/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.319567 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kbtm9_a1680cce-a286-460f-9e3f-145d9b364995/marketplace-operator/1.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.408139 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kbtm9_a1680cce-a286-460f-9e3f-145d9b364995/marketplace-operator/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.501240 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g9hdt_50316ae5-82e3-4dbc-ba50-dd2046abc0e1/extract-utilities/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.640882 4687 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g9hdt_50316ae5-82e3-4dbc-ba50-dd2046abc0e1/extract-content/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.665858 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g9hdt_50316ae5-82e3-4dbc-ba50-dd2046abc0e1/extract-utilities/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.671537 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g9hdt_50316ae5-82e3-4dbc-ba50-dd2046abc0e1/extract-content/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.833040 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g9hdt_50316ae5-82e3-4dbc-ba50-dd2046abc0e1/extract-content/0.log" Mar 12 17:26:01 crc kubenswrapper[4687]: I0312 17:26:01.863864 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g9hdt_50316ae5-82e3-4dbc-ba50-dd2046abc0e1/extract-utilities/0.log" Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.017533 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555606-gbx7n" event={"ID":"96eb9717-bb38-4f57-aa14-f7001a0d94d9","Type":"ContainerStarted","Data":"9af884d6651728f7f1fe9ec535857489717f6cc10f0546e592859fe06edc9ab7"} Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.061931 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g9hdt_50316ae5-82e3-4dbc-ba50-dd2046abc0e1/registry-server/0.log" Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.177957 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbgkx_5d2daa11-2756-4a7c-860a-44c13ab92d91/extract-utilities/0.log" Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.433992 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbgkx_5d2daa11-2756-4a7c-860a-44c13ab92d91/extract-content/0.log" Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.457473 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbgkx_5d2daa11-2756-4a7c-860a-44c13ab92d91/extract-utilities/0.log" Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.487100 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbgkx_5d2daa11-2756-4a7c-860a-44c13ab92d91/extract-content/0.log" Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.678592 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbgkx_5d2daa11-2756-4a7c-860a-44c13ab92d91/extract-content/0.log" Mar 12 17:26:02 crc kubenswrapper[4687]: I0312 17:26:02.728536 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbgkx_5d2daa11-2756-4a7c-860a-44c13ab92d91/extract-utilities/0.log" Mar 12 17:26:03 crc kubenswrapper[4687]: I0312 17:26:03.399828 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbgkx_5d2daa11-2756-4a7c-860a-44c13ab92d91/registry-server/0.log" Mar 12 17:26:04 crc kubenswrapper[4687]: I0312 17:26:04.042327 4687 generic.go:334] "Generic (PLEG): container finished" podID="96eb9717-bb38-4f57-aa14-f7001a0d94d9" containerID="280fd34adf458c7380ab119d4a5a1a7dc30c42565ebabb5b7c4c58244585f2d7" exitCode=0 Mar 12 17:26:04 crc 
kubenswrapper[4687]: I0312 17:26:04.042396 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555606-gbx7n" event={"ID":"96eb9717-bb38-4f57-aa14-f7001a0d94d9","Type":"ContainerDied","Data":"280fd34adf458c7380ab119d4a5a1a7dc30c42565ebabb5b7c4c58244585f2d7"} Mar 12 17:26:05 crc kubenswrapper[4687]: I0312 17:26:05.496242 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555606-gbx7n" Mar 12 17:26:05 crc kubenswrapper[4687]: I0312 17:26:05.650470 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jk5n\" (UniqueName: \"kubernetes.io/projected/96eb9717-bb38-4f57-aa14-f7001a0d94d9-kube-api-access-9jk5n\") pod \"96eb9717-bb38-4f57-aa14-f7001a0d94d9\" (UID: \"96eb9717-bb38-4f57-aa14-f7001a0d94d9\") " Mar 12 17:26:05 crc kubenswrapper[4687]: I0312 17:26:05.677541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96eb9717-bb38-4f57-aa14-f7001a0d94d9-kube-api-access-9jk5n" (OuterVolumeSpecName: "kube-api-access-9jk5n") pod "96eb9717-bb38-4f57-aa14-f7001a0d94d9" (UID: "96eb9717-bb38-4f57-aa14-f7001a0d94d9"). InnerVolumeSpecName "kube-api-access-9jk5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:26:05 crc kubenswrapper[4687]: I0312 17:26:05.752959 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jk5n\" (UniqueName: \"kubernetes.io/projected/96eb9717-bb38-4f57-aa14-f7001a0d94d9-kube-api-access-9jk5n\") on node \"crc\" DevicePath \"\"" Mar 12 17:26:06 crc kubenswrapper[4687]: I0312 17:26:06.065762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555606-gbx7n" event={"ID":"96eb9717-bb38-4f57-aa14-f7001a0d94d9","Type":"ContainerDied","Data":"9af884d6651728f7f1fe9ec535857489717f6cc10f0546e592859fe06edc9ab7"} Mar 12 17:26:06 crc kubenswrapper[4687]: I0312 17:26:06.065981 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af884d6651728f7f1fe9ec535857489717f6cc10f0546e592859fe06edc9ab7" Mar 12 17:26:06 crc kubenswrapper[4687]: I0312 17:26:06.066001 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555606-gbx7n" Mar 12 17:26:06 crc kubenswrapper[4687]: I0312 17:26:06.576454 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555600-c69vv"] Mar 12 17:26:06 crc kubenswrapper[4687]: I0312 17:26:06.587518 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555600-c69vv"] Mar 12 17:26:07 crc kubenswrapper[4687]: I0312 17:26:07.762837 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d22889-e03c-4ae1-affd-9829014a347e" path="/var/lib/kubelet/pods/50d22889-e03c-4ae1-affd-9829014a347e/volumes" Mar 12 17:26:14 crc kubenswrapper[4687]: I0312 17:26:14.121731 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:26:14 crc kubenswrapper[4687]: I0312 17:26:14.122245 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:26:14 crc kubenswrapper[4687]: I0312 17:26:14.122287 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 17:26:14 crc kubenswrapper[4687]: I0312 17:26:14.123284 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 17:26:14 crc kubenswrapper[4687]: I0312 17:26:14.123343 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" gracePeriod=600 Mar 12 17:26:14 crc kubenswrapper[4687]: E0312 17:26:14.256780 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:26:15 crc kubenswrapper[4687]: I0312 17:26:15.202376 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" exitCode=0 Mar 12 17:26:15 crc kubenswrapper[4687]: I0312 17:26:15.202397 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e"} Mar 12 
17:26:15 crc kubenswrapper[4687]: I0312 17:26:15.202465 4687 scope.go:117] "RemoveContainer" containerID="40ba0773921c7a67c712c495adb50129bda16d44f774878ddc552da57f9ef650" Mar 12 17:26:15 crc kubenswrapper[4687]: I0312 17:26:15.203421 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:26:15 crc kubenswrapper[4687]: E0312 17:26:15.203998 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:26:19 crc kubenswrapper[4687]: I0312 17:26:19.731020 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sb8w2_f97d4a45-34b0-4192-a7cb-05d23d1b614d/prometheus-operator/0.log" Mar 12 17:26:19 crc kubenswrapper[4687]: I0312 17:26:19.806494 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-59dcbff5b4-dqnw7_9e487d62-c210-4aa0-b5a2-371bcf18cad5/prometheus-operator-admission-webhook/0.log" Mar 12 17:26:19 crc kubenswrapper[4687]: I0312 17:26:19.808110 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-59dcbff5b4-qz4rn_b2fe1907-83dc-446c-b1e2-b2aa2c63dd6e/prometheus-operator-admission-webhook/0.log" Mar 12 17:26:19 crc kubenswrapper[4687]: I0312 17:26:19.938648 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tbpsw_afacf716-028a-4848-a495-83f7c01a47ca/operator/1.log" Mar 12 17:26:20 crc kubenswrapper[4687]: I0312 17:26:20.021480 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-mtdkz_3a1437be-06a2-43c0-9ae4-6be8a822e466/observability-ui-dashboards/0.log" Mar 12 17:26:20 crc kubenswrapper[4687]: I0312 17:26:20.067506 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tbpsw_afacf716-028a-4848-a495-83f7c01a47ca/operator/0.log" Mar 12 17:26:20 crc kubenswrapper[4687]: I0312 17:26:20.183175 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-z25z9_98e16c84-ec9c-482a-8962-ce13556ffd74/perses-operator/0.log" Mar 12 17:26:26 crc kubenswrapper[4687]: I0312 17:26:26.734429 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:26:26 crc kubenswrapper[4687]: E0312 17:26:26.735636 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:26:36 crc kubenswrapper[4687]: I0312 17:26:36.670202 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-56bfd9f789-6lvcv_4568a6b6-c008-4ea7-abec-b824324732d3/kube-rbac-proxy/0.log" Mar 12 17:26:36 crc kubenswrapper[4687]: I0312 17:26:36.719648 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-56bfd9f789-6lvcv_4568a6b6-c008-4ea7-abec-b824324732d3/manager/1.log" Mar 12 17:26:36 crc kubenswrapper[4687]: I0312 17:26:36.857241 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-56bfd9f789-6lvcv_4568a6b6-c008-4ea7-abec-b824324732d3/manager/0.log" Mar 12 17:26:39 crc kubenswrapper[4687]: I0312 17:26:39.733608 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:26:39 crc kubenswrapper[4687]: E0312 17:26:39.734594 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:26:53 crc kubenswrapper[4687]: I0312 17:26:53.738949 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:26:53 crc kubenswrapper[4687]: E0312 17:26:53.739947 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:26:59 crc kubenswrapper[4687]: I0312 17:26:59.427309 4687 scope.go:117] "RemoveContainer" containerID="c45c339ec6a42fdf1a8b2396946c4dc158ebd86656d16c0f6fecf2c93f578a9f" Mar 12 17:26:59 crc kubenswrapper[4687]: I0312 17:26:59.469294 4687 scope.go:117] "RemoveContainer" containerID="b02695143f83d70687c03d39710e035bb660098fe8fcfc0782035bee45e8832f" Mar 12 17:27:06 crc kubenswrapper[4687]: I0312 17:27:06.732911 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:27:06 crc kubenswrapper[4687]: E0312 17:27:06.733728 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:27:11 crc kubenswrapper[4687]: E0312 17:27:11.891628 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.38:54836->38.102.83.38:46595: write tcp 38.102.83.38:54836->38.102.83.38:46595: write: broken pipe Mar 12 17:27:19 crc kubenswrapper[4687]: I0312 17:27:19.733052 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:27:19 crc kubenswrapper[4687]: E0312 17:27:19.733908 4687 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:27:34 crc kubenswrapper[4687]: I0312 17:27:34.734022 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:27:34 crc kubenswrapper[4687]: E0312 17:27:34.734703 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:27:46 crc kubenswrapper[4687]: I0312 17:27:46.733447 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:27:46 crc kubenswrapper[4687]: E0312 17:27:46.734545 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:27:58 crc kubenswrapper[4687]: I0312 17:27:58.734697 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:27:58 crc kubenswrapper[4687]: E0312 17:27:58.735565 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.170147 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555608-v6vfr"] Mar 12 17:28:00 crc kubenswrapper[4687]: E0312 17:28:00.171752 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96eb9717-bb38-4f57-aa14-f7001a0d94d9" containerName="oc" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.171772 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="96eb9717-bb38-4f57-aa14-f7001a0d94d9" containerName="oc" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.172704 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="96eb9717-bb38-4f57-aa14-f7001a0d94d9" containerName="oc" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.183932 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.188409 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.188441 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.188839 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.213030 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555608-v6vfr"] Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.334227 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fh5\" (UniqueName: \"kubernetes.io/projected/01e2e938-d63e-4c9c-afdf-664d4812f648-kube-api-access-44fh5\") pod \"auto-csr-approver-29555608-v6vfr\" (UID: \"01e2e938-d63e-4c9c-afdf-664d4812f648\") " pod="openshift-infra/auto-csr-approver-29555608-v6vfr" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.436422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44fh5\" (UniqueName: \"kubernetes.io/projected/01e2e938-d63e-4c9c-afdf-664d4812f648-kube-api-access-44fh5\") pod \"auto-csr-approver-29555608-v6vfr\" (UID: \"01e2e938-d63e-4c9c-afdf-664d4812f648\") " pod="openshift-infra/auto-csr-approver-29555608-v6vfr" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.457280 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fh5\" (UniqueName: \"kubernetes.io/projected/01e2e938-d63e-4c9c-afdf-664d4812f648-kube-api-access-44fh5\") pod \"auto-csr-approver-29555608-v6vfr\" (UID: \"01e2e938-d63e-4c9c-afdf-664d4812f648\") " pod="openshift-infra/auto-csr-approver-29555608-v6vfr" Mar 12 17:28:00 crc kubenswrapper[4687]: I0312 17:28:00.536935 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" Mar 12 17:28:02 crc kubenswrapper[4687]: I0312 17:28:02.207793 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555608-v6vfr"] Mar 12 17:28:02 crc kubenswrapper[4687]: I0312 17:28:02.635772 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" event={"ID":"01e2e938-d63e-4c9c-afdf-664d4812f648","Type":"ContainerStarted","Data":"5b762b4298fe1247c8f5ade91375d79ccd5ba189dc9920f1fecfe6e636174431"} Mar 12 17:28:06 crc kubenswrapper[4687]: I0312 17:28:06.683717 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" event={"ID":"01e2e938-d63e-4c9c-afdf-664d4812f648","Type":"ContainerStarted","Data":"9940c458bdb82a61e5e5710c2bd29192b0bc7419e3a26c2ad5d784a2a3ffc80f"} Mar 12 17:28:08 crc kubenswrapper[4687]: I0312 17:28:08.707850 4687 generic.go:334] "Generic (PLEG): container finished" podID="01e2e938-d63e-4c9c-afdf-664d4812f648" containerID="9940c458bdb82a61e5e5710c2bd29192b0bc7419e3a26c2ad5d784a2a3ffc80f" exitCode=0 Mar 12 17:28:08 crc kubenswrapper[4687]: I0312 17:28:08.707876 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" event={"ID":"01e2e938-d63e-4c9c-afdf-664d4812f648","Type":"ContainerDied","Data":"9940c458bdb82a61e5e5710c2bd29192b0bc7419e3a26c2ad5d784a2a3ffc80f"} Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.190716 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.269546 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44fh5\" (UniqueName: \"kubernetes.io/projected/01e2e938-d63e-4c9c-afdf-664d4812f648-kube-api-access-44fh5\") pod \"01e2e938-d63e-4c9c-afdf-664d4812f648\" (UID: \"01e2e938-d63e-4c9c-afdf-664d4812f648\") " Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.283392 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e2e938-d63e-4c9c-afdf-664d4812f648-kube-api-access-44fh5" (OuterVolumeSpecName: "kube-api-access-44fh5") pod "01e2e938-d63e-4c9c-afdf-664d4812f648" (UID: "01e2e938-d63e-4c9c-afdf-664d4812f648"). InnerVolumeSpecName "kube-api-access-44fh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.372969 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44fh5\" (UniqueName: \"kubernetes.io/projected/01e2e938-d63e-4c9c-afdf-664d4812f648-kube-api-access-44fh5\") on node \"crc\" DevicePath \"\"" Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.737618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" event={"ID":"01e2e938-d63e-4c9c-afdf-664d4812f648","Type":"ContainerDied","Data":"5b762b4298fe1247c8f5ade91375d79ccd5ba189dc9920f1fecfe6e636174431"} Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.737689 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b762b4298fe1247c8f5ade91375d79ccd5ba189dc9920f1fecfe6e636174431" Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.737785 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555608-v6vfr" Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.824724 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555602-cfhzw"] Mar 12 17:28:10 crc kubenswrapper[4687]: I0312 17:28:10.839462 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555602-cfhzw"] Mar 12 17:28:11 crc kubenswrapper[4687]: I0312 17:28:11.748349 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd24dd7f-8745-4439-82d6-af2eca4e4885" path="/var/lib/kubelet/pods/bd24dd7f-8745-4439-82d6-af2eca4e4885/volumes" Mar 12 17:28:12 crc kubenswrapper[4687]: I0312 17:28:12.732607 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:28:12 crc kubenswrapper[4687]: E0312 17:28:12.733189 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:28:24 crc kubenswrapper[4687]: I0312 17:28:24.735299 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:28:24 crc kubenswrapper[4687]: E0312 17:28:24.736241 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:28:37 crc kubenswrapper[4687]: I0312 17:28:37.733814 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:28:37 crc kubenswrapper[4687]: E0312 17:28:37.735453 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.264903 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rhvqr"] Mar 12 17:28:40 crc kubenswrapper[4687]: E0312 17:28:40.265806 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e2e938-d63e-4c9c-afdf-664d4812f648" containerName="oc" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.265821 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e2e938-d63e-4c9c-afdf-664d4812f648" containerName="oc" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.266315 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e2e938-d63e-4c9c-afdf-664d4812f648" containerName="oc" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.271694 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.281584 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhvqr"] Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.421606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-utilities\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.421706 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwcb\" (UniqueName: \"kubernetes.io/projected/5b38245c-7e66-475b-8e23-4c2bfb52e498-kube-api-access-7gwcb\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.421740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-catalog-content\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.523822 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-utilities\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.523950 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwcb\" (UniqueName: \"kubernetes.io/projected/5b38245c-7e66-475b-8e23-4c2bfb52e498-kube-api-access-7gwcb\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.523998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-catalog-content\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.524477 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-utilities\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:40 crc kubenswrapper[4687]: I0312 17:28:40.524562 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-catalog-content\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:41 crc kubenswrapper[4687]: I0312 17:28:41.061730 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7gwcb\" (UniqueName: \"kubernetes.io/projected/5b38245c-7e66-475b-8e23-4c2bfb52e498-kube-api-access-7gwcb\") pod \"redhat-marketplace-rhvqr\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:41 crc kubenswrapper[4687]: I0312 17:28:41.240572 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:41 crc kubenswrapper[4687]: I0312 17:28:41.751597 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhvqr"] Mar 12 17:28:42 crc kubenswrapper[4687]: I0312 17:28:42.155711 4687 generic.go:334] "Generic (PLEG): container finished" podID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerID="f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9" exitCode=0 Mar 12 17:28:42 crc kubenswrapper[4687]: I0312 17:28:42.155755 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhvqr" event={"ID":"5b38245c-7e66-475b-8e23-4c2bfb52e498","Type":"ContainerDied","Data":"f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9"} Mar 12 17:28:42 crc kubenswrapper[4687]: I0312 17:28:42.156032 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhvqr" event={"ID":"5b38245c-7e66-475b-8e23-4c2bfb52e498","Type":"ContainerStarted","Data":"45dc53244da53625d9d5cc062620ed9588e43ded7beea07611379bb352d09756"} Mar 12 17:28:44 crc kubenswrapper[4687]: I0312 17:28:44.184328 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhvqr" event={"ID":"5b38245c-7e66-475b-8e23-4c2bfb52e498","Type":"ContainerStarted","Data":"81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8"} Mar 12 17:28:45 crc kubenswrapper[4687]: I0312 17:28:45.195439 4687 generic.go:334] "Generic (PLEG): container finished" podID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerID="81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8" exitCode=0 Mar 12 17:28:45 crc kubenswrapper[4687]: I0312 17:28:45.198725 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhvqr" event={"ID":"5b38245c-7e66-475b-8e23-4c2bfb52e498","Type":"ContainerDied","Data":"81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8"} Mar 12 17:28:46 crc kubenswrapper[4687]: I0312 17:28:46.208993 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhvqr" event={"ID":"5b38245c-7e66-475b-8e23-4c2bfb52e498","Type":"ContainerStarted","Data":"15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c"} Mar 12 17:28:46 crc kubenswrapper[4687]: I0312 17:28:46.241462 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rhvqr" podStartSLOduration=2.6589137000000003 podStartE2EDuration="6.241439823s" podCreationTimestamp="2026-03-12 17:28:40 +0000 UTC" firstStartedPulling="2026-03-12 17:28:42.158325574 +0000 UTC m=+5171.122287918" lastFinishedPulling="2026-03-12 17:28:45.740851697 +0000 UTC m=+5174.704814041" observedRunningTime="2026-03-12 17:28:46.237493435 +0000 UTC m=+5175.201455889" watchObservedRunningTime="2026-03-12 17:28:46.241439823 +0000 UTC m=+5175.205402167" Mar 12 17:28:49 crc kubenswrapper[4687]: I0312 17:28:49.244333 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerID="fbfdaa462e0c0c4efca4510a7a405fbe3af7cb26dfd54d8bdcec3b5275c8092c" exitCode=0 Mar 12 17:28:49 crc kubenswrapper[4687]: I0312 17:28:49.244692 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-65ct5/must-gather-h4b7p" event={"ID":"dd1bdda3-0007-425e-b879-ba23a52e4a1a","Type":"ContainerDied","Data":"fbfdaa462e0c0c4efca4510a7a405fbe3af7cb26dfd54d8bdcec3b5275c8092c"} Mar 12 17:28:49 crc kubenswrapper[4687]: I0312 17:28:49.245469 4687 scope.go:117] "RemoveContainer" containerID="fbfdaa462e0c0c4efca4510a7a405fbe3af7cb26dfd54d8bdcec3b5275c8092c" Mar 12 17:28:49 crc kubenswrapper[4687]: I0312 17:28:49.345203 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-65ct5_must-gather-h4b7p_dd1bdda3-0007-425e-b879-ba23a52e4a1a/gather/0.log" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.241748 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.242064 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.297135 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.357011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.368931 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vvmhq"] Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.372485 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.394284 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvmhq"] Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.514777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl6ng\" (UniqueName: \"kubernetes.io/projected/ee320280-6c35-4495-9ed3-4ba7e5550b86-kube-api-access-kl6ng\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.515133 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-catalog-content\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.515241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-utilities\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.618114 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl6ng\" (UniqueName: \"kubernetes.io/projected/ee320280-6c35-4495-9ed3-4ba7e5550b86-kube-api-access-kl6ng\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.618243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-catalog-content\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.618322 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-utilities\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.619118 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-catalog-content\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.619532 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-utilities\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.639470 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kl6ng\" (UniqueName: \"kubernetes.io/projected/ee320280-6c35-4495-9ed3-4ba7e5550b86-kube-api-access-kl6ng\") pod \"community-operators-vvmhq\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.723444 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.741235 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:28:51 crc kubenswrapper[4687]: E0312 17:28:51.741553 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:28:51 crc kubenswrapper[4687]: I0312 17:28:51.924960 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhvqr"] Mar 12 17:28:52 crc kubenswrapper[4687]: I0312 17:28:52.223151 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvmhq"] Mar 12 17:28:52 crc kubenswrapper[4687]: I0312 17:28:52.289266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvmhq" event={"ID":"ee320280-6c35-4495-9ed3-4ba7e5550b86","Type":"ContainerStarted","Data":"23b509e76677eec9edd67468a31742aa595eade0d435cfb5cf66ee1938083679"} Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.302652 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerID="8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc" exitCode=0 Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.302701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvmhq" event={"ID":"ee320280-6c35-4495-9ed3-4ba7e5550b86","Type":"ContainerDied","Data":"8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc"} Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.303059 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rhvqr" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="registry-server" containerID="cri-o://15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c" gracePeriod=2 Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.918003 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.979017 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-catalog-content\") pod \"5b38245c-7e66-475b-8e23-4c2bfb52e498\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.979250 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gwcb\" (UniqueName: \"kubernetes.io/projected/5b38245c-7e66-475b-8e23-4c2bfb52e498-kube-api-access-7gwcb\") pod \"5b38245c-7e66-475b-8e23-4c2bfb52e498\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.979326 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-utilities\") pod \"5b38245c-7e66-475b-8e23-4c2bfb52e498\" (UID: \"5b38245c-7e66-475b-8e23-4c2bfb52e498\") " Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.981039 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-utilities" (OuterVolumeSpecName: "utilities") pod "5b38245c-7e66-475b-8e23-4c2bfb52e498" (UID: "5b38245c-7e66-475b-8e23-4c2bfb52e498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:28:53 crc kubenswrapper[4687]: I0312 17:28:53.987255 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b38245c-7e66-475b-8e23-4c2bfb52e498-kube-api-access-7gwcb" (OuterVolumeSpecName: "kube-api-access-7gwcb") pod "5b38245c-7e66-475b-8e23-4c2bfb52e498" (UID: "5b38245c-7e66-475b-8e23-4c2bfb52e498"). InnerVolumeSpecName "kube-api-access-7gwcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.004360 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b38245c-7e66-475b-8e23-4c2bfb52e498" (UID: "5b38245c-7e66-475b-8e23-4c2bfb52e498"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.081862 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gwcb\" (UniqueName: \"kubernetes.io/projected/5b38245c-7e66-475b-8e23-4c2bfb52e498-kube-api-access-7gwcb\") on node \"crc\" DevicePath \"\"" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.081896 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.081906 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b38245c-7e66-475b-8e23-4c2bfb52e498-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.319649 4687 generic.go:334] "Generic (PLEG): container finished" podID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerID="15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c" exitCode=0 Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.319770 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhvqr" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.319770 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhvqr" event={"ID":"5b38245c-7e66-475b-8e23-4c2bfb52e498","Type":"ContainerDied","Data":"15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c"} Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.320321 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhvqr" event={"ID":"5b38245c-7e66-475b-8e23-4c2bfb52e498","Type":"ContainerDied","Data":"45dc53244da53625d9d5cc062620ed9588e43ded7beea07611379bb352d09756"} Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.320385 4687 scope.go:117] "RemoveContainer" containerID="15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.325342 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvmhq" event={"ID":"ee320280-6c35-4495-9ed3-4ba7e5550b86","Type":"ContainerStarted","Data":"84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4"} Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.353477 4687 scope.go:117] "RemoveContainer" containerID="81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.384139 4687 scope.go:117] "RemoveContainer" containerID="f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.400648 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhvqr"] Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.412556 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhvqr"] Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.413207 4687 scope.go:117] "RemoveContainer" containerID="15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c" Mar 12 17:28:54 crc kubenswrapper[4687]: E0312 17:28:54.414530 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c\": container with ID starting with 15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c not found: ID does not exist" containerID="15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.414574 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c"} err="failed to get container status \"15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c\": rpc error: code = NotFound desc = could not find container \"15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c\": container with ID starting with 15ee235e80e5d9d5398a8fd8d0cc0f9c7d78e73b12e1cd130bfae662bf37a72c not found: ID does not exist" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.414599 4687 scope.go:117] "RemoveContainer" containerID="81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8" Mar 12 17:28:54 crc kubenswrapper[4687]: E0312 17:28:54.415153 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8\": container with ID starting with 81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8 not found: ID does not exist" containerID="81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.415182 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8"} err="failed to get container status \"81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8\": rpc error: code = NotFound desc = could not find container \"81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8\": container with ID starting with 81ee8a5e8669e969df8e65cf9132308f09336c7b992d25f7bfdcb0aee04309a8 not found: ID does not exist" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.415200 4687 scope.go:117] "RemoveContainer" containerID="f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9" Mar 12 17:28:54 crc kubenswrapper[4687]: E0312 17:28:54.415570 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9\": container with ID starting with f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9 not found: ID does not exist" containerID="f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9" Mar 12 17:28:54 crc kubenswrapper[4687]: I0312 17:28:54.415597 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9"} err="failed to get container status \"f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9\": rpc error: code = NotFound desc = could not find container \"f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9\": container with ID starting with f74a3205719831aa4ed5b90bb68ab28108fc973358223386e599c96fe08ba9a9 not found: ID does not exist" Mar 12 17:28:55 crc kubenswrapper[4687]: I0312 17:28:55.747180 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" 
path="/var/lib/kubelet/pods/5b38245c-7e66-475b-8e23-4c2bfb52e498/volumes" Mar 12 17:28:56 crc kubenswrapper[4687]: I0312 17:28:56.357876 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerID="84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4" exitCode=0 Mar 12 17:28:56 crc kubenswrapper[4687]: I0312 17:28:56.357949 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvmhq" event={"ID":"ee320280-6c35-4495-9ed3-4ba7e5550b86","Type":"ContainerDied","Data":"84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4"} Mar 12 17:28:57 crc kubenswrapper[4687]: I0312 17:28:57.377432 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvmhq" event={"ID":"ee320280-6c35-4495-9ed3-4ba7e5550b86","Type":"ContainerStarted","Data":"99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700"} Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.100760 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vvmhq" podStartSLOduration=4.521460224 podStartE2EDuration="8.100736357s" podCreationTimestamp="2026-03-12 17:28:51 +0000 UTC" firstStartedPulling="2026-03-12 17:28:53.304765061 +0000 UTC m=+5182.268727405" lastFinishedPulling="2026-03-12 17:28:56.884041194 +0000 UTC m=+5185.848003538" observedRunningTime="2026-03-12 17:28:57.397626317 +0000 UTC m=+5186.361588671" watchObservedRunningTime="2026-03-12 17:28:59.100736357 +0000 UTC m=+5188.064698701" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.104608 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-65ct5/must-gather-h4b7p"] Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.104999 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-65ct5/must-gather-h4b7p" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerName="copy" containerID="cri-o://bb3ccb253e07bfdeb35c1df40908707dfe7753b92dfcdebca32514e5bd401d91" gracePeriod=2 Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.124788 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-65ct5/must-gather-h4b7p"] Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.426218 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-65ct5_must-gather-h4b7p_dd1bdda3-0007-425e-b879-ba23a52e4a1a/copy/0.log" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.430677 4687 generic.go:334] "Generic (PLEG): container finished" podID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerID="bb3ccb253e07bfdeb35c1df40908707dfe7753b92dfcdebca32514e5bd401d91" exitCode=143 Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.680368 4687 scope.go:117] "RemoveContainer" containerID="2900e6e94760a6f17dbef2b1f260fd91883c009b219513d170459546079872cb" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.693477 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-65ct5_must-gather-h4b7p_dd1bdda3-0007-425e-b879-ba23a52e4a1a/copy/0.log" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.693948 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.757044 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd1bdda3-0007-425e-b879-ba23a52e4a1a-must-gather-output\") pod \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.757289 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r42p\" (UniqueName: \"kubernetes.io/projected/dd1bdda3-0007-425e-b879-ba23a52e4a1a-kube-api-access-8r42p\") pod \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\" (UID: \"dd1bdda3-0007-425e-b879-ba23a52e4a1a\") " Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.767049 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1bdda3-0007-425e-b879-ba23a52e4a1a-kube-api-access-8r42p" (OuterVolumeSpecName: "kube-api-access-8r42p") pod "dd1bdda3-0007-425e-b879-ba23a52e4a1a" (UID: "dd1bdda3-0007-425e-b879-ba23a52e4a1a"). InnerVolumeSpecName "kube-api-access-8r42p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.865843 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r42p\" (UniqueName: \"kubernetes.io/projected/dd1bdda3-0007-425e-b879-ba23a52e4a1a-kube-api-access-8r42p\") on node \"crc\" DevicePath \"\"" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.923012 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd1bdda3-0007-425e-b879-ba23a52e4a1a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "dd1bdda3-0007-425e-b879-ba23a52e4a1a" (UID: "dd1bdda3-0007-425e-b879-ba23a52e4a1a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:28:59 crc kubenswrapper[4687]: I0312 17:28:59.968627 4687 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd1bdda3-0007-425e-b879-ba23a52e4a1a-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 12 17:29:00 crc kubenswrapper[4687]: I0312 17:29:00.444858 4687 scope.go:117] "RemoveContainer" containerID="bb3ccb253e07bfdeb35c1df40908707dfe7753b92dfcdebca32514e5bd401d91" Mar 12 17:29:00 crc kubenswrapper[4687]: I0312 17:29:00.444920 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-65ct5/must-gather-h4b7p" Mar 12 17:29:00 crc kubenswrapper[4687]: I0312 17:29:00.467308 4687 scope.go:117] "RemoveContainer" containerID="fbfdaa462e0c0c4efca4510a7a405fbe3af7cb26dfd54d8bdcec3b5275c8092c" Mar 12 17:29:01 crc kubenswrapper[4687]: I0312 17:29:01.724886 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:29:01 crc kubenswrapper[4687]: I0312 17:29:01.725149 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:29:01 crc kubenswrapper[4687]: I0312 17:29:01.745508 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" path="/var/lib/kubelet/pods/dd1bdda3-0007-425e-b879-ba23a52e4a1a/volumes" Mar 12 17:29:01 crc kubenswrapper[4687]: I0312 17:29:01.778569 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:29:02 crc kubenswrapper[4687]: I0312 17:29:02.523232 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:29:02 crc kubenswrapper[4687]: I0312 17:29:02.603906 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvmhq"] Mar 12 17:29:04 crc kubenswrapper[4687]: I0312 17:29:04.499587 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vvmhq" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="registry-server" containerID="cri-o://99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700" gracePeriod=2 Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.050140 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.221535 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-catalog-content\") pod \"ee320280-6c35-4495-9ed3-4ba7e5550b86\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.221709 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-utilities\") pod \"ee320280-6c35-4495-9ed3-4ba7e5550b86\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.221778 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl6ng\" (UniqueName: \"kubernetes.io/projected/ee320280-6c35-4495-9ed3-4ba7e5550b86-kube-api-access-kl6ng\") pod \"ee320280-6c35-4495-9ed3-4ba7e5550b86\" (UID: \"ee320280-6c35-4495-9ed3-4ba7e5550b86\") " Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.225411 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-utilities" (OuterVolumeSpecName: "utilities") pod "ee320280-6c35-4495-9ed3-4ba7e5550b86" (UID: "ee320280-6c35-4495-9ed3-4ba7e5550b86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.239530 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee320280-6c35-4495-9ed3-4ba7e5550b86-kube-api-access-kl6ng" (OuterVolumeSpecName: "kube-api-access-kl6ng") pod "ee320280-6c35-4495-9ed3-4ba7e5550b86" (UID: "ee320280-6c35-4495-9ed3-4ba7e5550b86"). InnerVolumeSpecName "kube-api-access-kl6ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.290646 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee320280-6c35-4495-9ed3-4ba7e5550b86" (UID: "ee320280-6c35-4495-9ed3-4ba7e5550b86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.326193 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl6ng\" (UniqueName: \"kubernetes.io/projected/ee320280-6c35-4495-9ed3-4ba7e5550b86-kube-api-access-kl6ng\") on node \"crc\" DevicePath \"\"" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.326397 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.326501 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee320280-6c35-4495-9ed3-4ba7e5550b86-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.517944 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerID="99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700" exitCode=0 Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.517990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvmhq" event={"ID":"ee320280-6c35-4495-9ed3-4ba7e5550b86","Type":"ContainerDied","Data":"99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700"} Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.518018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvmhq" event={"ID":"ee320280-6c35-4495-9ed3-4ba7e5550b86","Type":"ContainerDied","Data":"23b509e76677eec9edd67468a31742aa595eade0d435cfb5cf66ee1938083679"} Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.518035 4687 scope.go:117] "RemoveContainer" containerID="99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.518111 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvmhq" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.572616 4687 scope.go:117] "RemoveContainer" containerID="84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.581218 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvmhq"] Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.606317 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vvmhq"] Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.638868 4687 scope.go:117] "RemoveContainer" containerID="8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.675311 4687 scope.go:117] "RemoveContainer" containerID="99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700" Mar 12 17:29:05 crc kubenswrapper[4687]: E0312 17:29:05.676582 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700\": container with ID starting with 99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700 not found: ID does not exist" containerID="99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.676770 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700"} err="failed to get container status \"99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700\": rpc error: code = NotFound desc = could not find container \"99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700\": container with ID starting with 99ede0322588304ebc7eca45f4401799cb9079f79a600601186852b81f182700 not found: ID does not exist" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.676799 4687 scope.go:117] "RemoveContainer" containerID="84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4" Mar 12 17:29:05 crc kubenswrapper[4687]: E0312 17:29:05.677132 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4\": container with ID starting with 84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4 not found: ID does not exist" containerID="84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.677157 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4"} err="failed to get container status \"84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4\": rpc error: code = NotFound desc = could not find container \"84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4\": container with ID starting with 84b76173ed2e68e9b5ef0f30e8e58c50cdcd63355355764aa2d97c341b1b95f4 not found: ID does not exist" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.677172 4687 scope.go:117] "RemoveContainer" containerID="8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc" Mar 12 17:29:05 crc kubenswrapper[4687]: E0312 17:29:05.677388 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc\": container with ID starting with 8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc not found: ID does not exist" containerID="8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.677416 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc"} err="failed to get container status \"8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc\": rpc error: code = NotFound desc = could not find container \"8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc\": container with ID starting with 8803cf1adf28e216ace145088ffea406f9622284bdd3348ae838a4ee1169d2bc not found: ID does not exist" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.733965 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:29:05 crc kubenswrapper[4687]: E0312 17:29:05.734252 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:29:05 crc kubenswrapper[4687]: I0312 17:29:05.749803 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" path="/var/lib/kubelet/pods/ee320280-6c35-4495-9ed3-4ba7e5550b86/volumes" Mar 12 17:29:16 crc kubenswrapper[4687]: I0312 17:29:16.738335 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:29:16 crc kubenswrapper[4687]: E0312 17:29:16.740055 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:29:31 crc kubenswrapper[4687]: I0312 17:29:31.747227 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:29:31 crc kubenswrapper[4687]: E0312 17:29:31.748241 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:29:45 crc kubenswrapper[4687]: I0312 17:29:45.732899 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:29:45 crc kubenswrapper[4687]: E0312 17:29:45.734048 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:29:59 crc kubenswrapper[4687]: I0312 17:29:59.733251 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:29:59 crc kubenswrapper[4687]: E0312 17:29:59.734132 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.155950 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555610-wb2kc"] Mar 12 17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156543 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerName="gather" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156565 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerName="gather" Mar 12 17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156601 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="extract-content" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156608 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="extract-content" Mar 12 17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156616 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="extract-content" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156622 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="extract-content" Mar 12 17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156635 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="extract-utilities" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156641 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="extract-utilities" Mar 12 17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156650 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="registry-server" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156657 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="registry-server" Mar 12 17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156670 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="registry-server" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156676 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="registry-server" Mar 12 
17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156700 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerName="copy" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156707 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerName="copy" Mar 12 17:30:00 crc kubenswrapper[4687]: E0312 17:30:00.156738 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="extract-utilities" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.156747 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="extract-utilities" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.157017 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerName="gather" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.157033 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee320280-6c35-4495-9ed3-4ba7e5550b86" containerName="registry-server" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.157084 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b38245c-7e66-475b-8e23-4c2bfb52e498" containerName="registry-server" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.157100 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1bdda3-0007-425e-b879-ba23a52e4a1a" containerName="copy" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.158154 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555610-wb2kc" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.160351 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.161521 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.163814 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.176571 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg"] Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.178092 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.184344 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.184406 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.192115 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555610-wb2kc"] Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.223839 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg"] Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.249483 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9702413d-18d6-4914-9133-503ad30e264e-secret-volume\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.249599 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gch9m\" (UniqueName: \"kubernetes.io/projected/f1ddaed4-cab5-42c8-9580-16e3cc14a279-kube-api-access-gch9m\") pod \"auto-csr-approver-29555610-wb2kc\" (UID: \"f1ddaed4-cab5-42c8-9580-16e3cc14a279\") " pod="openshift-infra/auto-csr-approver-29555610-wb2kc" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.249688 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9702413d-18d6-4914-9133-503ad30e264e-config-volume\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.249913 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql869\" (UniqueName: \"kubernetes.io/projected/9702413d-18d6-4914-9133-503ad30e264e-kube-api-access-ql869\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.351675 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9702413d-18d6-4914-9133-503ad30e264e-secret-volume\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.352042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gch9m\" (UniqueName: \"kubernetes.io/projected/f1ddaed4-cab5-42c8-9580-16e3cc14a279-kube-api-access-gch9m\") pod \"auto-csr-approver-29555610-wb2kc\" (UID: \"f1ddaed4-cab5-42c8-9580-16e3cc14a279\") " pod="openshift-infra/auto-csr-approver-29555610-wb2kc" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.352146 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9702413d-18d6-4914-9133-503ad30e264e-config-volume\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.352314 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql869\" (UniqueName: \"kubernetes.io/projected/9702413d-18d6-4914-9133-503ad30e264e-kube-api-access-ql869\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.353318 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9702413d-18d6-4914-9133-503ad30e264e-config-volume\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.361545 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9702413d-18d6-4914-9133-503ad30e264e-secret-volume\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.372789 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql869\" (UniqueName: \"kubernetes.io/projected/9702413d-18d6-4914-9133-503ad30e264e-kube-api-access-ql869\") pod \"collect-profiles-29555610-gl4dg\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.374984 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gch9m\" (UniqueName: \"kubernetes.io/projected/f1ddaed4-cab5-42c8-9580-16e3cc14a279-kube-api-access-gch9m\") pod \"auto-csr-approver-29555610-wb2kc\" (UID: \"f1ddaed4-cab5-42c8-9580-16e3cc14a279\") " pod="openshift-infra/auto-csr-approver-29555610-wb2kc" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.482187 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555610-wb2kc" Mar 12 17:30:00 crc kubenswrapper[4687]: I0312 17:30:00.499318 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:01 crc kubenswrapper[4687]: I0312 17:30:01.049620 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg"] Mar 12 17:30:01 crc kubenswrapper[4687]: I0312 17:30:01.160799 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555610-wb2kc"] Mar 12 17:30:01 crc kubenswrapper[4687]: W0312 17:30:01.172151 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1ddaed4_cab5_42c8_9580_16e3cc14a279.slice/crio-48e18696b715e3cfbac86585b39a61d5bf0aed9f39728c31739a0621a486beb7 WatchSource:0}: Error finding container 48e18696b715e3cfbac86585b39a61d5bf0aed9f39728c31739a0621a486beb7: Status 404 returned error can't find the container with id 48e18696b715e3cfbac86585b39a61d5bf0aed9f39728c31739a0621a486beb7 Mar 12 17:30:01 crc kubenswrapper[4687]: I0312 17:30:01.328024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" event={"ID":"9702413d-18d6-4914-9133-503ad30e264e","Type":"ContainerStarted","Data":"ba9187203ec142b31890a5cbfd90dc8a3c28bb6044520f44e38def112a943162"} Mar 12 17:30:01 crc kubenswrapper[4687]: I0312 17:30:01.328077 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" event={"ID":"9702413d-18d6-4914-9133-503ad30e264e","Type":"ContainerStarted","Data":"55949f0893a8f341ef5d3e0245533b6c25d67d95a6c3e808f8f582de615e62d8"} Mar 12 17:30:01 crc kubenswrapper[4687]: I0312 17:30:01.330582 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555610-wb2kc" event={"ID":"f1ddaed4-cab5-42c8-9580-16e3cc14a279","Type":"ContainerStarted","Data":"48e18696b715e3cfbac86585b39a61d5bf0aed9f39728c31739a0621a486beb7"} Mar 12 17:30:01 crc kubenswrapper[4687]: I0312 17:30:01.350803 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" podStartSLOduration=1.350785427 podStartE2EDuration="1.350785427s" podCreationTimestamp="2026-03-12 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 17:30:01.34174969 +0000 UTC m=+5250.305712034" watchObservedRunningTime="2026-03-12 17:30:01.350785427 +0000 UTC m=+5250.314747771" Mar 12 17:30:02 crc kubenswrapper[4687]: I0312 17:30:02.342716 4687 generic.go:334] "Generic (PLEG): container finished" podID="9702413d-18d6-4914-9133-503ad30e264e" containerID="ba9187203ec142b31890a5cbfd90dc8a3c28bb6044520f44e38def112a943162" exitCode=0 Mar 12 17:30:02 crc kubenswrapper[4687]: I0312 17:30:02.342830 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" event={"ID":"9702413d-18d6-4914-9133-503ad30e264e","Type":"ContainerDied","Data":"ba9187203ec142b31890a5cbfd90dc8a3c28bb6044520f44e38def112a943162"} Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.852819 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.981668 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9702413d-18d6-4914-9133-503ad30e264e-config-volume\") pod \"9702413d-18d6-4914-9133-503ad30e264e\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.981774 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9702413d-18d6-4914-9133-503ad30e264e-secret-volume\") pod \"9702413d-18d6-4914-9133-503ad30e264e\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.981854 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql869\" (UniqueName: \"kubernetes.io/projected/9702413d-18d6-4914-9133-503ad30e264e-kube-api-access-ql869\") pod \"9702413d-18d6-4914-9133-503ad30e264e\" (UID: \"9702413d-18d6-4914-9133-503ad30e264e\") " Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.982366 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9702413d-18d6-4914-9133-503ad30e264e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9702413d-18d6-4914-9133-503ad30e264e" (UID: "9702413d-18d6-4914-9133-503ad30e264e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.982689 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9702413d-18d6-4914-9133-503ad30e264e-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.987284 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9702413d-18d6-4914-9133-503ad30e264e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9702413d-18d6-4914-9133-503ad30e264e" (UID: "9702413d-18d6-4914-9133-503ad30e264e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 17:30:03 crc kubenswrapper[4687]: I0312 17:30:03.987740 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9702413d-18d6-4914-9133-503ad30e264e-kube-api-access-ql869" (OuterVolumeSpecName: "kube-api-access-ql869") pod "9702413d-18d6-4914-9133-503ad30e264e" (UID: "9702413d-18d6-4914-9133-503ad30e264e"). InnerVolumeSpecName "kube-api-access-ql869". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.084889 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9702413d-18d6-4914-9133-503ad30e264e-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.084920 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql869\" (UniqueName: \"kubernetes.io/projected/9702413d-18d6-4914-9133-503ad30e264e-kube-api-access-ql869\") on node \"crc\" DevicePath \"\"" Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.393783 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.394426 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555610-gl4dg" event={"ID":"9702413d-18d6-4914-9133-503ad30e264e","Type":"ContainerDied","Data":"55949f0893a8f341ef5d3e0245533b6c25d67d95a6c3e808f8f582de615e62d8"} Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.394528 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55949f0893a8f341ef5d3e0245533b6c25d67d95a6c3e808f8f582de615e62d8" Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.398171 4687 generic.go:334] "Generic (PLEG): container finished" podID="f1ddaed4-cab5-42c8-9580-16e3cc14a279" containerID="dfdcc755d44f9d6dbd8390593275921a68d5e68d47d48b77c09d4fd19f67c867" exitCode=0 Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.398334 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555610-wb2kc" event={"ID":"f1ddaed4-cab5-42c8-9580-16e3cc14a279","Type":"ContainerDied","Data":"dfdcc755d44f9d6dbd8390593275921a68d5e68d47d48b77c09d4fd19f67c867"} Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.448280 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j"] Mar 12 17:30:04 crc kubenswrapper[4687]: I0312 17:30:04.463624 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555565-ztn4j"] Mar 12 17:30:05 crc kubenswrapper[4687]: I0312 17:30:05.758198 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a56ae63-1736-499e-b424-a3a0f8936561" path="/var/lib/kubelet/pods/5a56ae63-1736-499e-b424-a3a0f8936561/volumes" Mar 12 17:30:05 crc kubenswrapper[4687]: I0312 17:30:05.921485 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555610-wb2kc" Mar 12 17:30:06 crc kubenswrapper[4687]: I0312 17:30:06.041287 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gch9m\" (UniqueName: \"kubernetes.io/projected/f1ddaed4-cab5-42c8-9580-16e3cc14a279-kube-api-access-gch9m\") pod \"f1ddaed4-cab5-42c8-9580-16e3cc14a279\" (UID: \"f1ddaed4-cab5-42c8-9580-16e3cc14a279\") " Mar 12 17:30:06 crc kubenswrapper[4687]: I0312 17:30:06.046719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ddaed4-cab5-42c8-9580-16e3cc14a279-kube-api-access-gch9m" (OuterVolumeSpecName: "kube-api-access-gch9m") pod "f1ddaed4-cab5-42c8-9580-16e3cc14a279" (UID: "f1ddaed4-cab5-42c8-9580-16e3cc14a279"). InnerVolumeSpecName "kube-api-access-gch9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:30:06 crc kubenswrapper[4687]: I0312 17:30:06.145125 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gch9m\" (UniqueName: \"kubernetes.io/projected/f1ddaed4-cab5-42c8-9580-16e3cc14a279-kube-api-access-gch9m\") on node \"crc\" DevicePath \"\"" Mar 12 17:30:06 crc kubenswrapper[4687]: I0312 17:30:06.432249 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555610-wb2kc" event={"ID":"f1ddaed4-cab5-42c8-9580-16e3cc14a279","Type":"ContainerDied","Data":"48e18696b715e3cfbac86585b39a61d5bf0aed9f39728c31739a0621a486beb7"} Mar 12 17:30:06 crc kubenswrapper[4687]: I0312 17:30:06.432301 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e18696b715e3cfbac86585b39a61d5bf0aed9f39728c31739a0621a486beb7" Mar 12 17:30:06 crc kubenswrapper[4687]: I0312 17:30:06.432355 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555610-wb2kc" Mar 12 17:30:06 crc kubenswrapper[4687]: I0312 17:30:06.989226 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555604-cxb7p"] Mar 12 17:30:07 crc kubenswrapper[4687]: I0312 17:30:07.006272 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555604-cxb7p"] Mar 12 17:30:07 crc kubenswrapper[4687]: I0312 17:30:07.747078 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a64d491-fb3a-432a-a02b-ee9a4abc96dd" path="/var/lib/kubelet/pods/1a64d491-fb3a-432a-a02b-ee9a4abc96dd/volumes" Mar 12 17:30:10 crc kubenswrapper[4687]: I0312 17:30:10.733298 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:30:10 crc kubenswrapper[4687]: E0312 17:30:10.734198 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:30:21 crc kubenswrapper[4687]: I0312 17:30:21.733959 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:30:21 crc kubenswrapper[4687]: E0312 17:30:21.737060 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:30:35 crc kubenswrapper[4687]: I0312 17:30:35.733432 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:30:35 crc kubenswrapper[4687]: E0312 17:30:35.734537 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:30:46 crc kubenswrapper[4687]: I0312 17:30:46.733860 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:30:46 crc kubenswrapper[4687]: E0312 17:30:46.734553 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:30:53 crc kubenswrapper[4687]: I0312 17:30:53.977989 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z4k4w"] Mar 12 17:30:53 crc kubenswrapper[4687]: E0312 17:30:53.979266 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ddaed4-cab5-42c8-9580-16e3cc14a279" containerName="oc" Mar 12 17:30:53 crc kubenswrapper[4687]: I0312 17:30:53.979287 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ddaed4-cab5-42c8-9580-16e3cc14a279" containerName="oc" Mar 12 17:30:53 crc kubenswrapper[4687]: E0312 17:30:53.979337 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9702413d-18d6-4914-9133-503ad30e264e" containerName="collect-profiles" Mar 12 17:30:53 crc kubenswrapper[4687]: I0312 17:30:53.979345 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9702413d-18d6-4914-9133-503ad30e264e" containerName="collect-profiles" Mar 12 17:30:53 crc kubenswrapper[4687]: I0312 17:30:53.979724 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ddaed4-cab5-42c8-9580-16e3cc14a279" containerName="oc" Mar 12 17:30:53 crc kubenswrapper[4687]: I0312 17:30:53.979757 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9702413d-18d6-4914-9133-503ad30e264e" containerName="collect-profiles" Mar 12 17:30:53 crc kubenswrapper[4687]: I0312 17:30:53.982146 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.001530 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4k4w"] Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.080595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-utilities\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.080690 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2sw\" (UniqueName: \"kubernetes.io/projected/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-kube-api-access-ml2sw\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.080735 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-catalog-content\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.183454 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-utilities\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.183765 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml2sw\" (UniqueName: \"kubernetes.io/projected/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-kube-api-access-ml2sw\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.183803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-catalog-content\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.183997 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-utilities\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.184273 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-catalog-content\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.213650 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ml2sw\" (UniqueName: \"kubernetes.io/projected/ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445-kube-api-access-ml2sw\") pod \"certified-operators-z4k4w\" (UID: \"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445\") " pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.372814 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:30:54 crc kubenswrapper[4687]: I0312 17:30:54.872563 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4k4w"] Mar 12 17:30:55 crc kubenswrapper[4687]: I0312 17:30:55.092443 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4w" event={"ID":"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445","Type":"ContainerStarted","Data":"27c2ca1aac7ad7ed6c78f0be3d31ecdf3a31452434988c548f9a202d4aeca1b7"} Mar 12 17:30:56 crc kubenswrapper[4687]: I0312 17:30:56.104003 4687 generic.go:334] "Generic (PLEG): container finished" podID="ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445" containerID="040ddbdce3b81d07255243e9fd99ecb91a047c625c0f06227f3c39abcc5e785c" exitCode=0 Mar 12 17:30:56 crc kubenswrapper[4687]: I0312 17:30:56.104270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4w" event={"ID":"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445","Type":"ContainerDied","Data":"040ddbdce3b81d07255243e9fd99ecb91a047c625c0f06227f3c39abcc5e785c"} Mar 12 17:30:59 crc kubenswrapper[4687]: I0312 17:30:59.733673 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:30:59 crc kubenswrapper[4687]: E0312 17:30:59.734960 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:30:59 crc kubenswrapper[4687]: I0312 17:30:59.844019 4687 scope.go:117] "RemoveContainer" containerID="74c6b95ca46e5f1b91aa5d5ddaa8e11e8ac2690a9b1cdd4aa40768fa7889a90a" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.553794 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dsfbv"] Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.556689 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.570713 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dsfbv"] Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.671049 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-utilities\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.671517 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-catalog-content\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.671822 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6rq2\" (UniqueName: \"kubernetes.io/projected/baf59fd9-b66d-4650-ad0b-ac38814557bc-kube-api-access-f6rq2\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.774487 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6rq2\" (UniqueName: \"kubernetes.io/projected/baf59fd9-b66d-4650-ad0b-ac38814557bc-kube-api-access-f6rq2\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.774572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-utilities\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.774752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-catalog-content\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.775098 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-utilities\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.775528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-catalog-content\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.783651 4687 scope.go:117] "RemoveContainer" 
containerID="12504da925e887ec22304f5e95529431f37217a6461300acb20b92f113134997" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.798033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6rq2\" (UniqueName: \"kubernetes.io/projected/baf59fd9-b66d-4650-ad0b-ac38814557bc-kube-api-access-f6rq2\") pod \"redhat-operators-dsfbv\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:01 crc kubenswrapper[4687]: I0312 17:31:01.883632 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:03 crc kubenswrapper[4687]: I0312 17:31:03.069441 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dsfbv"] Mar 12 17:31:03 crc kubenswrapper[4687]: I0312 17:31:03.248751 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsfbv" event={"ID":"baf59fd9-b66d-4650-ad0b-ac38814557bc","Type":"ContainerStarted","Data":"38b399fb09658112aff8f3ef442b37cc3027dce87379cf3a72471bb0fe605c81"} Mar 12 17:31:03 crc kubenswrapper[4687]: I0312 17:31:03.251107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4w" event={"ID":"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445","Type":"ContainerStarted","Data":"6f42a919c76ab5f81ac75965998321893b68f2e174cfd07cb1b40a5c988248ba"} Mar 12 17:31:04 crc kubenswrapper[4687]: I0312 17:31:04.265679 4687 generic.go:334] "Generic (PLEG): container finished" podID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerID="83aa70c19dd47fee6149f10f4e022be3298e5af13d67cdea812d6bb4a6f424fc" exitCode=0 Mar 12 17:31:04 crc kubenswrapper[4687]: I0312 17:31:04.265744 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsfbv" event={"ID":"baf59fd9-b66d-4650-ad0b-ac38814557bc","Type":"ContainerDied","Data":"83aa70c19dd47fee6149f10f4e022be3298e5af13d67cdea812d6bb4a6f424fc"} Mar 12 17:31:04 crc kubenswrapper[4687]: I0312 17:31:04.268802 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 17:31:05 crc kubenswrapper[4687]: I0312 17:31:05.280659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsfbv" event={"ID":"baf59fd9-b66d-4650-ad0b-ac38814557bc","Type":"ContainerStarted","Data":"1ef3158fe0639fb99f3c184a3e27b855085201c54e684db3dae219f9111b4656"} Mar 12 17:31:06 crc kubenswrapper[4687]: I0312 17:31:06.295200 4687 generic.go:334] "Generic (PLEG): container finished" podID="ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445" containerID="6f42a919c76ab5f81ac75965998321893b68f2e174cfd07cb1b40a5c988248ba" exitCode=0 Mar 12 17:31:06 crc kubenswrapper[4687]: I0312 17:31:06.295591 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4w" event={"ID":"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445","Type":"ContainerDied","Data":"6f42a919c76ab5f81ac75965998321893b68f2e174cfd07cb1b40a5c988248ba"} Mar 12 17:31:07 crc kubenswrapper[4687]: I0312 17:31:07.317224 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4k4w" event={"ID":"ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445","Type":"ContainerStarted","Data":"379ba91d6e22b66b542a968fabd75986dd050d0bd32281b8b08ef0fc76e203aa"} Mar 12 17:31:07 crc kubenswrapper[4687]: I0312 17:31:07.338152 4687 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-z4k4w" podStartSLOduration=3.642382123 podStartE2EDuration="14.338133655s" podCreationTimestamp="2026-03-12 17:30:53 +0000 UTC" firstStartedPulling="2026-03-12 17:30:56.106398389 +0000 UTC m=+5305.070360733" lastFinishedPulling="2026-03-12 17:31:06.802149911 +0000 UTC m=+5315.766112265" observedRunningTime="2026-03-12 17:31:07.33574308 +0000 UTC m=+5316.299705434" watchObservedRunningTime="2026-03-12 17:31:07.338133655 +0000 UTC m=+5316.302095999" Mar 12 17:31:10 crc kubenswrapper[4687]: I0312 17:31:10.733213 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:31:10 crc kubenswrapper[4687]: E0312 17:31:10.734183 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bxjh2_openshift-machine-config-operator(a785ed51-b59b-4ec7-b31c-a66279b9151c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" Mar 12 17:31:11 crc kubenswrapper[4687]: I0312 17:31:11.371478 4687 generic.go:334] "Generic (PLEG): container finished" podID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerID="1ef3158fe0639fb99f3c184a3e27b855085201c54e684db3dae219f9111b4656" exitCode=0 Mar 12 17:31:11 crc kubenswrapper[4687]: I0312 17:31:11.371538 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsfbv" event={"ID":"baf59fd9-b66d-4650-ad0b-ac38814557bc","Type":"ContainerDied","Data":"1ef3158fe0639fb99f3c184a3e27b855085201c54e684db3dae219f9111b4656"} Mar 12 17:31:12 crc kubenswrapper[4687]: I0312 17:31:12.385535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsfbv" event={"ID":"baf59fd9-b66d-4650-ad0b-ac38814557bc","Type":"ContainerStarted","Data":"027479435bdc3996bc152a08e84a2a11311d0f66d720e0c1c09300793b6a7d9c"} Mar 12 17:31:12 crc kubenswrapper[4687]: I0312 17:31:12.416903 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dsfbv" podStartSLOduration=3.866036141 podStartE2EDuration="11.416884019s" podCreationTimestamp="2026-03-12 17:31:01 +0000 UTC" firstStartedPulling="2026-03-12 17:31:04.268320602 +0000 UTC m=+5313.232282956" lastFinishedPulling="2026-03-12 17:31:11.81916849 +0000 UTC m=+5320.783130834" observedRunningTime="2026-03-12 17:31:12.409547078 +0000 UTC m=+5321.373509472" watchObservedRunningTime="2026-03-12 17:31:12.416884019 +0000 UTC m=+5321.380846363" Mar 12 17:31:14 crc kubenswrapper[4687]: I0312 17:31:14.374256 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:31:14 crc kubenswrapper[4687]: I0312 17:31:14.374740 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:31:15 crc kubenswrapper[4687]: I0312 17:31:15.433979 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z4k4w" podUID="ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445" containerName="registry-server" probeResult="failure" output=< Mar 12 17:31:15 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:31:15 crc kubenswrapper[4687]: > Mar 12 17:31:21 crc 
kubenswrapper[4687]: I0312 17:31:21.884265 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:21 crc kubenswrapper[4687]: I0312 17:31:21.884907 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:31:22 crc kubenswrapper[4687]: I0312 17:31:22.732677 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:31:22 crc kubenswrapper[4687]: I0312 17:31:22.943449 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dsfbv" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" probeResult="failure" output=< Mar 12 17:31:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:31:22 crc kubenswrapper[4687]: > Mar 12 17:31:23 crc kubenswrapper[4687]: I0312 17:31:23.506317 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"21eb5c22068ba5b2c66e634115e8ac1a5e962ccbc13fb033dac3d8aa1bc1156f"} Mar 12 17:31:25 crc kubenswrapper[4687]: I0312 17:31:25.430146 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z4k4w" podUID="ca53f3f6-38ee-4c0e-a8a9-a4ee642c2445" containerName="registry-server" probeResult="failure" output=< Mar 12 17:31:25 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:31:25 crc kubenswrapper[4687]: > Mar 12 17:31:32 crc kubenswrapper[4687]: I0312 17:31:32.944258 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dsfbv" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" probeResult="failure" output=< Mar 12 17:31:32 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:31:32 crc kubenswrapper[4687]: > Mar 12 17:31:34 crc kubenswrapper[4687]: I0312 17:31:34.426850 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:31:34 crc kubenswrapper[4687]: I0312 17:31:34.486032 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z4k4w" Mar 12 17:31:34 crc kubenswrapper[4687]: I0312 17:31:34.578317 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4k4w"] Mar 12 17:31:34 crc kubenswrapper[4687]: I0312 17:31:34.683340 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2wpn"] Mar 12 17:31:34 crc kubenswrapper[4687]: I0312 17:31:34.683631 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z2wpn" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" containerID="cri-o://f1693940ab2b78af019a9bf4c1175535442dd66762f217cbb6b26da157373477" gracePeriod=2 Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.658693 4687 generic.go:334] "Generic (PLEG): container finished" podID="dea71408-c307-4443-a026-547ca7196ff6" containerID="f1693940ab2b78af019a9bf4c1175535442dd66762f217cbb6b26da157373477" exitCode=0 Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.659014 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wpn" event={"ID":"dea71408-c307-4443-a026-547ca7196ff6","Type":"ContainerDied","Data":"f1693940ab2b78af019a9bf4c1175535442dd66762f217cbb6b26da157373477"} Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.800508 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.871876 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shznq\" (UniqueName: \"kubernetes.io/projected/dea71408-c307-4443-a026-547ca7196ff6-kube-api-access-shznq\") pod \"dea71408-c307-4443-a026-547ca7196ff6\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.871980 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-catalog-content\") pod \"dea71408-c307-4443-a026-547ca7196ff6\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.872123 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-utilities\") pod \"dea71408-c307-4443-a026-547ca7196ff6\" (UID: \"dea71408-c307-4443-a026-547ca7196ff6\") " Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.876396 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-utilities" (OuterVolumeSpecName: "utilities") pod "dea71408-c307-4443-a026-547ca7196ff6" (UID: "dea71408-c307-4443-a026-547ca7196ff6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.890075 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea71408-c307-4443-a026-547ca7196ff6-kube-api-access-shznq" (OuterVolumeSpecName: "kube-api-access-shznq") pod "dea71408-c307-4443-a026-547ca7196ff6" (UID: "dea71408-c307-4443-a026-547ca7196ff6"). InnerVolumeSpecName "kube-api-access-shznq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.977236 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shznq\" (UniqueName: \"kubernetes.io/projected/dea71408-c307-4443-a026-547ca7196ff6-kube-api-access-shznq\") on node \"crc\" DevicePath \"\"" Mar 12 17:31:35 crc kubenswrapper[4687]: I0312 17:31:35.977264 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.025675 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dea71408-c307-4443-a026-547ca7196ff6" (UID: "dea71408-c307-4443-a026-547ca7196ff6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.078960 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea71408-c307-4443-a026-547ca7196ff6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.675148 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wpn" event={"ID":"dea71408-c307-4443-a026-547ca7196ff6","Type":"ContainerDied","Data":"d08150890bd4836876b824f0f41ca96207105c8ffd8201a28093b4bdf88ba23b"} Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.675204 4687 scope.go:117] "RemoveContainer" containerID="f1693940ab2b78af019a9bf4c1175535442dd66762f217cbb6b26da157373477" Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.676091 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2wpn" Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.714610 4687 scope.go:117] "RemoveContainer" containerID="2d423b1dfb8802084cbe56a38bfd1bec04bbb3ef7a1a382a73e879837471f9ea" Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.715478 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2wpn"] Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.727531 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z2wpn"] Mar 12 17:31:36 crc kubenswrapper[4687]: I0312 17:31:36.754830 4687 scope.go:117] "RemoveContainer" containerID="e185f136b839faf6ee841bbfe474dd03947d7af81ccf2b622a141029229b9ddf" Mar 12 17:31:37 crc kubenswrapper[4687]: I0312 17:31:37.750324 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea71408-c307-4443-a026-547ca7196ff6" path="/var/lib/kubelet/pods/dea71408-c307-4443-a026-547ca7196ff6/volumes" Mar 12 17:31:42 crc kubenswrapper[4687]: I0312 17:31:42.958453 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dsfbv" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" probeResult="failure" output=< Mar 12 17:31:42 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:31:42 crc kubenswrapper[4687]: > Mar 12 17:31:52 crc kubenswrapper[4687]: I0312 17:31:52.965584 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dsfbv" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" probeResult="failure" output=< Mar 12 17:31:52 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Mar 12 17:31:52 crc kubenswrapper[4687]: > Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.174593 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555612-vtz9m"] Mar 12 17:32:00 crc kubenswrapper[4687]: E0312 17:32:00.175988 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="extract-content" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.176005 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="extract-content" Mar 12 17:32:00 crc kubenswrapper[4687]: E0312 17:32:00.176043 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea71408-c307-4443-a026-547ca7196ff6" 
containerName="registry-server" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.176051 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" Mar 12 17:32:00 crc kubenswrapper[4687]: E0312 17:32:00.176075 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="extract-utilities" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.176085 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="extract-utilities" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.176416 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea71408-c307-4443-a026-547ca7196ff6" containerName="registry-server" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.177510 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.182714 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.190516 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.193067 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.202497 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555612-vtz9m"] Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.319752 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52w5\" (UniqueName: \"kubernetes.io/projected/529889bb-bf4e-4fed-bac1-e75e9714564d-kube-api-access-f52w5\") pod \"auto-csr-approver-29555612-vtz9m\" (UID: \"529889bb-bf4e-4fed-bac1-e75e9714564d\") " pod="openshift-infra/auto-csr-approver-29555612-vtz9m" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.422474 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52w5\" (UniqueName: \"kubernetes.io/projected/529889bb-bf4e-4fed-bac1-e75e9714564d-kube-api-access-f52w5\") pod \"auto-csr-approver-29555612-vtz9m\" (UID: \"529889bb-bf4e-4fed-bac1-e75e9714564d\") " pod="openshift-infra/auto-csr-approver-29555612-vtz9m" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.461490 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52w5\" (UniqueName: \"kubernetes.io/projected/529889bb-bf4e-4fed-bac1-e75e9714564d-kube-api-access-f52w5\") pod \"auto-csr-approver-29555612-vtz9m\" (UID: \"529889bb-bf4e-4fed-bac1-e75e9714564d\") " pod="openshift-infra/auto-csr-approver-29555612-vtz9m" Mar 12 17:32:00 crc kubenswrapper[4687]: I0312 17:32:00.525268 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" Mar 12 17:32:01 crc kubenswrapper[4687]: I0312 17:32:01.949569 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:32:02 crc kubenswrapper[4687]: I0312 17:32:02.026754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:32:02 crc kubenswrapper[4687]: I0312 17:32:02.284873 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555612-vtz9m"] Mar 12 17:32:03 crc kubenswrapper[4687]: I0312 17:32:03.131439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" event={"ID":"529889bb-bf4e-4fed-bac1-e75e9714564d","Type":"ContainerStarted","Data":"f2c7ca2f660c30ed03c8daac23ad20f9d1b601bb6d8bb30ed1d97f811252d5c2"} Mar 12 17:32:03 crc kubenswrapper[4687]: I0312 17:32:03.160856 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dsfbv"] Mar 12 17:32:03 crc kubenswrapper[4687]: I0312 17:32:03.161142 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dsfbv" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" containerID="cri-o://027479435bdc3996bc152a08e84a2a11311d0f66d720e0c1c09300793b6a7d9c" gracePeriod=2 Mar 12 17:32:04 crc kubenswrapper[4687]: I0312 17:32:04.147313 4687 generic.go:334] "Generic (PLEG): container finished" podID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerID="027479435bdc3996bc152a08e84a2a11311d0f66d720e0c1c09300793b6a7d9c" exitCode=0 Mar 12 17:32:04 crc kubenswrapper[4687]: I0312 17:32:04.147564 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsfbv" event={"ID":"baf59fd9-b66d-4650-ad0b-ac38814557bc","Type":"ContainerDied","Data":"027479435bdc3996bc152a08e84a2a11311d0f66d720e0c1c09300793b6a7d9c"} Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.117567 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.163593 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsfbv" event={"ID":"baf59fd9-b66d-4650-ad0b-ac38814557bc","Type":"ContainerDied","Data":"38b399fb09658112aff8f3ef442b37cc3027dce87379cf3a72471bb0fe605c81"} Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.163659 4687 scope.go:117] "RemoveContainer" containerID="027479435bdc3996bc152a08e84a2a11311d0f66d720e0c1c09300793b6a7d9c" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.163801 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dsfbv" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.187812 4687 scope.go:117] "RemoveContainer" containerID="1ef3158fe0639fb99f3c184a3e27b855085201c54e684db3dae219f9111b4656" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.218994 4687 scope.go:117] "RemoveContainer" containerID="83aa70c19dd47fee6149f10f4e022be3298e5af13d67cdea812d6bb4a6f424fc" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.291293 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-utilities\") pod \"baf59fd9-b66d-4650-ad0b-ac38814557bc\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.291493 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-catalog-content\") pod \"baf59fd9-b66d-4650-ad0b-ac38814557bc\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.291530 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6rq2\" (UniqueName: \"kubernetes.io/projected/baf59fd9-b66d-4650-ad0b-ac38814557bc-kube-api-access-f6rq2\") pod \"baf59fd9-b66d-4650-ad0b-ac38814557bc\" (UID: \"baf59fd9-b66d-4650-ad0b-ac38814557bc\") " Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.292152 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-utilities" (OuterVolumeSpecName: "utilities") pod "baf59fd9-b66d-4650-ad0b-ac38814557bc" (UID: "baf59fd9-b66d-4650-ad0b-ac38814557bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.292416 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.305155 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf59fd9-b66d-4650-ad0b-ac38814557bc-kube-api-access-f6rq2" (OuterVolumeSpecName: "kube-api-access-f6rq2") pod "baf59fd9-b66d-4650-ad0b-ac38814557bc" (UID: "baf59fd9-b66d-4650-ad0b-ac38814557bc"). InnerVolumeSpecName "kube-api-access-f6rq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.402958 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6rq2\" (UniqueName: \"kubernetes.io/projected/baf59fd9-b66d-4650-ad0b-ac38814557bc-kube-api-access-f6rq2\") on node \"crc\" DevicePath \"\"" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.435249 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "baf59fd9-b66d-4650-ad0b-ac38814557bc" (UID: "baf59fd9-b66d-4650-ad0b-ac38814557bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.502244 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dsfbv"] Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.505212 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baf59fd9-b66d-4650-ad0b-ac38814557bc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.513556 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dsfbv"] Mar 12 17:32:05 crc kubenswrapper[4687]: I0312 17:32:05.748757 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" path="/var/lib/kubelet/pods/baf59fd9-b66d-4650-ad0b-ac38814557bc/volumes" Mar 12 17:32:06 crc kubenswrapper[4687]: I0312 17:32:06.178175 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" event={"ID":"529889bb-bf4e-4fed-bac1-e75e9714564d","Type":"ContainerStarted","Data":"13d832a65f0632009e4ab55e026475cfb0899a577155a3e8ba1c632a7c0b6207"} Mar 12 17:32:06 crc kubenswrapper[4687]: I0312 17:32:06.207828 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" podStartSLOduration=4.863121789 podStartE2EDuration="6.207797532s" podCreationTimestamp="2026-03-12 17:32:00 +0000 UTC" firstStartedPulling="2026-03-12 17:32:02.294303104 +0000 UTC m=+5371.258265448" lastFinishedPulling="2026-03-12 17:32:03.638978847 +0000 UTC m=+5372.602941191" observedRunningTime="2026-03-12 17:32:06.197181832 +0000 UTC m=+5375.161144226" watchObservedRunningTime="2026-03-12 17:32:06.207797532 +0000 UTC m=+5375.171759916" Mar 12 17:32:07 crc kubenswrapper[4687]: I0312 17:32:07.196842 4687 generic.go:334] "Generic (PLEG): container finished" podID="529889bb-bf4e-4fed-bac1-e75e9714564d" containerID="13d832a65f0632009e4ab55e026475cfb0899a577155a3e8ba1c632a7c0b6207" exitCode=0 Mar 12 17:32:07 crc kubenswrapper[4687]: I0312 17:32:07.196929 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" event={"ID":"529889bb-bf4e-4fed-bac1-e75e9714564d","Type":"ContainerDied","Data":"13d832a65f0632009e4ab55e026475cfb0899a577155a3e8ba1c632a7c0b6207"} Mar 12 17:32:08 crc kubenswrapper[4687]: I0312 17:32:08.667181 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" Mar 12 17:32:08 crc kubenswrapper[4687]: I0312 17:32:08.786355 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f52w5\" (UniqueName: \"kubernetes.io/projected/529889bb-bf4e-4fed-bac1-e75e9714564d-kube-api-access-f52w5\") pod \"529889bb-bf4e-4fed-bac1-e75e9714564d\" (UID: \"529889bb-bf4e-4fed-bac1-e75e9714564d\") " Mar 12 17:32:08 crc kubenswrapper[4687]: I0312 17:32:08.793729 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529889bb-bf4e-4fed-bac1-e75e9714564d-kube-api-access-f52w5" (OuterVolumeSpecName: "kube-api-access-f52w5") pod "529889bb-bf4e-4fed-bac1-e75e9714564d" (UID: "529889bb-bf4e-4fed-bac1-e75e9714564d"). InnerVolumeSpecName "kube-api-access-f52w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:32:08 crc kubenswrapper[4687]: I0312 17:32:08.890405 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f52w5\" (UniqueName: \"kubernetes.io/projected/529889bb-bf4e-4fed-bac1-e75e9714564d-kube-api-access-f52w5\") on node \"crc\" DevicePath \"\"" Mar 12 17:32:09 crc kubenswrapper[4687]: I0312 17:32:09.242893 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" event={"ID":"529889bb-bf4e-4fed-bac1-e75e9714564d","Type":"ContainerDied","Data":"f2c7ca2f660c30ed03c8daac23ad20f9d1b601bb6d8bb30ed1d97f811252d5c2"} Mar 12 17:32:09 crc kubenswrapper[4687]: I0312 17:32:09.243420 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2c7ca2f660c30ed03c8daac23ad20f9d1b601bb6d8bb30ed1d97f811252d5c2" Mar 12 17:32:09 crc kubenswrapper[4687]: I0312 17:32:09.243536 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555612-vtz9m" Mar 12 17:32:09 crc kubenswrapper[4687]: I0312 17:32:09.315920 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555606-gbx7n"] Mar 12 17:32:09 crc kubenswrapper[4687]: I0312 17:32:09.327482 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555606-gbx7n"] Mar 12 17:32:09 crc kubenswrapper[4687]: I0312 17:32:09.753664 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96eb9717-bb38-4f57-aa14-f7001a0d94d9" path="/var/lib/kubelet/pods/96eb9717-bb38-4f57-aa14-f7001a0d94d9/volumes" Mar 12 17:33:03 crc kubenswrapper[4687]: I0312 17:33:03.664128 4687 scope.go:117] "RemoveContainer" containerID="280fd34adf458c7380ab119d4a5a1a7dc30c42565ebabb5b7c4c58244585f2d7" Mar 12 17:33:44 crc kubenswrapper[4687]: I0312 17:33:44.121484 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:33:44 crc kubenswrapper[4687]: I0312 17:33:44.122248 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.158994 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555614-7txsm"] Mar 12 17:34:00 crc kubenswrapper[4687]: E0312 17:34:00.160259 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.160282 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" Mar 12 17:34:00 crc kubenswrapper[4687]: E0312 17:34:00.160316 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="extract-utilities" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.160330 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" 
containerName="extract-utilities" Mar 12 17:34:00 crc kubenswrapper[4687]: E0312 17:34:00.160423 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529889bb-bf4e-4fed-bac1-e75e9714564d" containerName="oc" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.160438 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="529889bb-bf4e-4fed-bac1-e75e9714564d" containerName="oc" Mar 12 17:34:00 crc kubenswrapper[4687]: E0312 17:34:00.160468 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="extract-content" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.160483 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="extract-content" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.160921 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf59fd9-b66d-4650-ad0b-ac38814557bc" containerName="registry-server" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.160989 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="529889bb-bf4e-4fed-bac1-e75e9714564d" containerName="oc" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.162421 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555614-7txsm" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.165118 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.165482 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.166072 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.173489 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555614-7txsm"] Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.273125 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5xk4\" (UniqueName: \"kubernetes.io/projected/21922ff1-f57e-43a3-ad49-df7214efb32f-kube-api-access-s5xk4\") pod \"auto-csr-approver-29555614-7txsm\" (UID: \"21922ff1-f57e-43a3-ad49-df7214efb32f\") " pod="openshift-infra/auto-csr-approver-29555614-7txsm" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.375984 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5xk4\" (UniqueName: \"kubernetes.io/projected/21922ff1-f57e-43a3-ad49-df7214efb32f-kube-api-access-s5xk4\") pod \"auto-csr-approver-29555614-7txsm\" (UID: \"21922ff1-f57e-43a3-ad49-df7214efb32f\") " pod="openshift-infra/auto-csr-approver-29555614-7txsm" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.402081 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5xk4\" (UniqueName: \"kubernetes.io/projected/21922ff1-f57e-43a3-ad49-df7214efb32f-kube-api-access-s5xk4\") pod \"auto-csr-approver-29555614-7txsm\" (UID: \"21922ff1-f57e-43a3-ad49-df7214efb32f\") " pod="openshift-infra/auto-csr-approver-29555614-7txsm" Mar 12 17:34:00 crc kubenswrapper[4687]: I0312 17:34:00.497943 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555614-7txsm" Mar 12 17:34:01 crc kubenswrapper[4687]: I0312 17:34:01.035853 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555614-7txsm"] Mar 12 17:34:01 crc kubenswrapper[4687]: I0312 17:34:01.877824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555614-7txsm" event={"ID":"21922ff1-f57e-43a3-ad49-df7214efb32f","Type":"ContainerStarted","Data":"749edd73dd9163c477c59797a8cd259bb34abcdd4d259ebe273d14e35bc003ef"} Mar 12 17:34:02 crc kubenswrapper[4687]: I0312 17:34:02.893916 4687 generic.go:334] "Generic (PLEG): container finished" podID="21922ff1-f57e-43a3-ad49-df7214efb32f" containerID="e466587cfb1d4ca630e25b2e0bffe52f7bac717ff58fb1e6d2a5abb00c20bc3d" exitCode=0 Mar 12 17:34:02 crc kubenswrapper[4687]: I0312 17:34:02.893987 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555614-7txsm" event={"ID":"21922ff1-f57e-43a3-ad49-df7214efb32f","Type":"ContainerDied","Data":"e466587cfb1d4ca630e25b2e0bffe52f7bac717ff58fb1e6d2a5abb00c20bc3d"} Mar 12 17:34:04 crc kubenswrapper[4687]: I0312 17:34:04.859847 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" podUID="63f537cc-6a26-4a05-9b17-80549297e9f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:34:04 crc kubenswrapper[4687]: I0312 17:34:04.860695 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-65vx5" podUID="63f537cc-6a26-4a05-9b17-80549297e9f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 17:34:04 crc kubenswrapper[4687]: I0312 17:34:04.862100 4687 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.1.78:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 17:34:04 crc kubenswrapper[4687]: I0312 17:34:04.862145 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2dc7c2f7-1478-4385-be1a-a2257e4dc2d3" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.1.78:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 17:34:05 crc kubenswrapper[4687]: I0312 17:34:05.306052 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555614-7txsm" Mar 12 17:34:05 crc kubenswrapper[4687]: I0312 17:34:05.439843 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5xk4\" (UniqueName: \"kubernetes.io/projected/21922ff1-f57e-43a3-ad49-df7214efb32f-kube-api-access-s5xk4\") pod \"21922ff1-f57e-43a3-ad49-df7214efb32f\" (UID: \"21922ff1-f57e-43a3-ad49-df7214efb32f\") " Mar 12 17:34:05 crc kubenswrapper[4687]: I0312 17:34:05.445036 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21922ff1-f57e-43a3-ad49-df7214efb32f-kube-api-access-s5xk4" (OuterVolumeSpecName: "kube-api-access-s5xk4") pod "21922ff1-f57e-43a3-ad49-df7214efb32f" (UID: "21922ff1-f57e-43a3-ad49-df7214efb32f"). InnerVolumeSpecName "kube-api-access-s5xk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:34:05 crc kubenswrapper[4687]: I0312 17:34:05.543303 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5xk4\" (UniqueName: \"kubernetes.io/projected/21922ff1-f57e-43a3-ad49-df7214efb32f-kube-api-access-s5xk4\") on node \"crc\" DevicePath \"\"" Mar 12 17:34:05 crc kubenswrapper[4687]: I0312 17:34:05.957303 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555614-7txsm" event={"ID":"21922ff1-f57e-43a3-ad49-df7214efb32f","Type":"ContainerDied","Data":"749edd73dd9163c477c59797a8cd259bb34abcdd4d259ebe273d14e35bc003ef"} Mar 12 17:34:05 crc kubenswrapper[4687]: I0312 17:34:05.957665 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="749edd73dd9163c477c59797a8cd259bb34abcdd4d259ebe273d14e35bc003ef" Mar 12 17:34:05 crc kubenswrapper[4687]: I0312 17:34:05.957348 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555614-7txsm" Mar 12 17:34:06 crc kubenswrapper[4687]: I0312 17:34:06.403914 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555608-v6vfr"] Mar 12 17:34:06 crc kubenswrapper[4687]: I0312 17:34:06.414706 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555608-v6vfr"] Mar 12 17:34:07 crc kubenswrapper[4687]: I0312 17:34:07.752864 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e2e938-d63e-4c9c-afdf-664d4812f648" path="/var/lib/kubelet/pods/01e2e938-d63e-4c9c-afdf-664d4812f648/volumes" Mar 12 17:34:14 crc kubenswrapper[4687]: I0312 17:34:14.121680 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:34:14 crc kubenswrapper[4687]: I0312 17:34:14.122488 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.122167 4687 patch_prober.go:28] interesting pod/machine-config-daemon-bxjh2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.122974 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.123067 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.125253 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21eb5c22068ba5b2c66e634115e8ac1a5e962ccbc13fb033dac3d8aa1bc1156f"} pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.125634 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" podUID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerName="machine-config-daemon" containerID="cri-o://21eb5c22068ba5b2c66e634115e8ac1a5e962ccbc13fb033dac3d8aa1bc1156f" gracePeriod=600 Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.506745 4687 generic.go:334] "Generic (PLEG): container finished" podID="a785ed51-b59b-4ec7-b31c-a66279b9151c" containerID="21eb5c22068ba5b2c66e634115e8ac1a5e962ccbc13fb033dac3d8aa1bc1156f" exitCode=0 Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.506810 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerDied","Data":"21eb5c22068ba5b2c66e634115e8ac1a5e962ccbc13fb033dac3d8aa1bc1156f"} Mar 12 17:34:44 crc kubenswrapper[4687]: I0312 17:34:44.507155 4687 scope.go:117] "RemoveContainer" containerID="d05ef570a88632970dbaec4e58e0be035ff78d9f3b4197d4929d10d60010bf7e" Mar 12 17:34:45 crc kubenswrapper[4687]: I0312 17:34:45.519253 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bxjh2" event={"ID":"a785ed51-b59b-4ec7-b31c-a66279b9151c","Type":"ContainerStarted","Data":"356a3578981cafcdf881b0e6b092d227919b91faba62d093d3226b0813824ea0"} Mar 12 17:35:03 crc kubenswrapper[4687]: I0312 17:35:03.842701 4687 scope.go:117] "RemoveContainer" containerID="9940c458bdb82a61e5e5710c2bd29192b0bc7419e3a26c2ad5d784a2a3ffc80f" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.154222 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555616-x69r7"] Mar 12 17:36:00 crc kubenswrapper[4687]: E0312 17:36:00.155450 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21922ff1-f57e-43a3-ad49-df7214efb32f" containerName="oc" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.155468 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="21922ff1-f57e-43a3-ad49-df7214efb32f" containerName="oc" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.155798 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="21922ff1-f57e-43a3-ad49-df7214efb32f" containerName="oc" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.156854 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555616-x69r7" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.163935 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.164061 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.164232 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-fkmzs" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.168196 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555616-x69r7"] Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.242106 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdwq\" (UniqueName: \"kubernetes.io/projected/9e632de2-f0de-4f2e-b512-d395a628fb22-kube-api-access-gkdwq\") pod \"auto-csr-approver-29555616-x69r7\" (UID: \"9e632de2-f0de-4f2e-b512-d395a628fb22\") " pod="openshift-infra/auto-csr-approver-29555616-x69r7" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.344026 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkdwq\" (UniqueName: \"kubernetes.io/projected/9e632de2-f0de-4f2e-b512-d395a628fb22-kube-api-access-gkdwq\") pod \"auto-csr-approver-29555616-x69r7\" (UID: \"9e632de2-f0de-4f2e-b512-d395a628fb22\") " pod="openshift-infra/auto-csr-approver-29555616-x69r7" Mar 12 17:36:00 crc kubenswrapper[4687]: I0312 17:36:00.962737 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkdwq\" (UniqueName: \"kubernetes.io/projected/9e632de2-f0de-4f2e-b512-d395a628fb22-kube-api-access-gkdwq\") pod \"auto-csr-approver-29555616-x69r7\" (UID: \"9e632de2-f0de-4f2e-b512-d395a628fb22\") " pod="openshift-infra/auto-csr-approver-29555616-x69r7" Mar 12 17:36:01 crc kubenswrapper[4687]: I0312 17:36:01.085822 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555616-x69r7" Mar 12 17:36:01 crc kubenswrapper[4687]: I0312 17:36:01.631718 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555616-x69r7"] Mar 12 17:36:01 crc kubenswrapper[4687]: W0312 17:36:01.633948 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e632de2_f0de_4f2e_b512_d395a628fb22.slice/crio-728869344be471652f6de3f605eda787ea420404fae90d9c85d695d0cb39ff15 WatchSource:0}: Error finding container 728869344be471652f6de3f605eda787ea420404fae90d9c85d695d0cb39ff15: Status 404 returned error can't find the container with id 728869344be471652f6de3f605eda787ea420404fae90d9c85d695d0cb39ff15 Mar 12 17:36:02 crc kubenswrapper[4687]: I0312 17:36:02.544456 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555616-x69r7" event={"ID":"9e632de2-f0de-4f2e-b512-d395a628fb22","Type":"ContainerStarted","Data":"728869344be471652f6de3f605eda787ea420404fae90d9c85d695d0cb39ff15"} Mar 12 17:36:04 crc kubenswrapper[4687]: I0312 17:36:04.882206 4687 generic.go:334] "Generic (PLEG): container finished" podID="9e632de2-f0de-4f2e-b512-d395a628fb22" containerID="28b19485d14ff4d84aedf5f0b75e09fca960d3cb16bdbfd3c18866593ff9d749" exitCode=0 Mar 12 17:36:04 crc kubenswrapper[4687]: I0312 17:36:04.882498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555616-x69r7" event={"ID":"9e632de2-f0de-4f2e-b512-d395a628fb22","Type":"ContainerDied","Data":"28b19485d14ff4d84aedf5f0b75e09fca960d3cb16bdbfd3c18866593ff9d749"} Mar 12 17:36:06 crc kubenswrapper[4687]: I0312 17:36:06.343132 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555616-x69r7" Mar 12 17:36:06 crc kubenswrapper[4687]: I0312 17:36:06.484283 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkdwq\" (UniqueName: \"kubernetes.io/projected/9e632de2-f0de-4f2e-b512-d395a628fb22-kube-api-access-gkdwq\") pod \"9e632de2-f0de-4f2e-b512-d395a628fb22\" (UID: \"9e632de2-f0de-4f2e-b512-d395a628fb22\") " Mar 12 17:36:06 crc kubenswrapper[4687]: I0312 17:36:06.493750 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e632de2-f0de-4f2e-b512-d395a628fb22-kube-api-access-gkdwq" (OuterVolumeSpecName: "kube-api-access-gkdwq") pod "9e632de2-f0de-4f2e-b512-d395a628fb22" (UID: "9e632de2-f0de-4f2e-b512-d395a628fb22"). InnerVolumeSpecName "kube-api-access-gkdwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 17:36:06 crc kubenswrapper[4687]: I0312 17:36:06.588762 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkdwq\" (UniqueName: \"kubernetes.io/projected/9e632de2-f0de-4f2e-b512-d395a628fb22-kube-api-access-gkdwq\") on node \"crc\" DevicePath \"\"" Mar 12 17:36:06 crc kubenswrapper[4687]: I0312 17:36:06.908113 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555616-x69r7" event={"ID":"9e632de2-f0de-4f2e-b512-d395a628fb22","Type":"ContainerDied","Data":"728869344be471652f6de3f605eda787ea420404fae90d9c85d695d0cb39ff15"} Mar 12 17:36:06 crc kubenswrapper[4687]: I0312 17:36:06.908182 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="728869344be471652f6de3f605eda787ea420404fae90d9c85d695d0cb39ff15" Mar 12 17:36:06 crc kubenswrapper[4687]: I0312 17:36:06.908162 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555616-x69r7" Mar 12 17:36:07 crc kubenswrapper[4687]: I0312 17:36:07.417643 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555610-wb2kc"] Mar 12 17:36:07 crc kubenswrapper[4687]: I0312 17:36:07.430341 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555610-wb2kc"] Mar 12 17:36:07 crc kubenswrapper[4687]: I0312 17:36:07.745645 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ddaed4-cab5-42c8-9580-16e3cc14a279" path="/var/lib/kubelet/pods/f1ddaed4-cab5-42c8-9580-16e3cc14a279/volumes"